2025-06-02 12:33:48.638001 | Job console starting
2025-06-02 12:33:48.646921 | Updating git repos
2025-06-02 12:33:48.716702 | Cloning repos into workspace
2025-06-02 12:33:48.892792 | Restoring repo states
2025-06-02 12:33:48.917086 | Merging changes
2025-06-02 12:33:48.917106 | Checking out repos
2025-06-02 12:33:49.145898 | Preparing playbooks
2025-06-02 12:33:49.820135 | Running Ansible setup
2025-06-02 12:33:54.151308 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 12:33:54.905641 |
2025-06-02 12:33:54.905815 | PLAY [Base pre]
2025-06-02 12:33:54.923076 |
2025-06-02 12:33:54.923221 | TASK [Setup log path fact]
2025-06-02 12:33:54.953627 | orchestrator | ok
2025-06-02 12:33:54.970970 |
2025-06-02 12:33:54.971225 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 12:33:55.013397 | orchestrator | ok
2025-06-02 12:33:55.026942 |
2025-06-02 12:33:55.027084 | TASK [emit-job-header : Print job information]
2025-06-02 12:33:55.082773 | # Job Information
2025-06-02 12:33:55.083091 | Ansible Version: 2.16.14
2025-06-02 12:33:55.083149 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-02 12:33:55.083199 | Pipeline: post
2025-06-02 12:33:55.083233 | Executor: 521e9411259a
2025-06-02 12:33:55.083264 | Triggered by: https://github.com/osism/testbed/commit/5813b17ae086a94a05f5680616379ffb7585bf19
2025-06-02 12:33:55.083294 | Event ID: d0e18cf0-3fad-11f0-9fc6-74ef1215d406
2025-06-02 12:33:55.091819 |
2025-06-02 12:33:55.091953 | LOOP [emit-job-header : Print node information]
2025-06-02 12:33:55.204254 | orchestrator | ok:
2025-06-02 12:33:55.204507 | orchestrator | # Node Information
2025-06-02 12:33:55.204542 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 12:33:55.204566 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 12:33:55.204587 | orchestrator | Username: zuul-testbed05
2025-06-02 12:33:55.204607 | orchestrator | Distro: Debian 12.11
2025-06-02 12:33:55.204630 | orchestrator | Provider: static-testbed
2025-06-02 12:33:55.204651 | orchestrator | Region:
2025-06-02 12:33:55.204670 | orchestrator | Label: testbed-orchestrator
2025-06-02 12:33:55.204690 | orchestrator | Product Name: OpenStack Nova
2025-06-02 12:33:55.204709 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 12:33:55.224256 |
2025-06-02 12:33:55.224403 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 12:33:55.707127 | orchestrator -> localhost | changed
2025-06-02 12:33:55.715730 |
2025-06-02 12:33:55.715861 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 12:33:56.763065 | orchestrator -> localhost | changed
2025-06-02 12:33:56.777406 |
2025-06-02 12:33:56.777533 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 12:33:57.055015 | orchestrator -> localhost | ok
2025-06-02 12:33:57.067348 |
2025-06-02 12:33:57.067521 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 12:33:57.089837 | orchestrator | ok
2025-06-02 12:33:57.108845 | orchestrator | included: /var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 12:33:57.117404 |
2025-06-02 12:33:57.117531 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 12:33:58.579018 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 12:33:58.579268 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/1c910ada1ace424a8673b485a079b076_id_rsa
2025-06-02 12:33:58.579308 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/1c910ada1ace424a8673b485a079b076_id_rsa.pub
2025-06-02 12:33:58.579334 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 12:33:58.579359 | orchestrator -> localhost | SHA256:vwcXpDw8dj2HrXBcDBrKyAt0h/TahnEFj521LgKBmTM zuul-build-sshkey
2025-06-02 12:33:58.579381 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 12:33:58.579421 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 12:33:58.579444 | orchestrator -> localhost | | .=+.o.o oo |
2025-06-02 12:33:58.579466 | orchestrator -> localhost | | .Eo.* *.= .o|
2025-06-02 12:33:58.579486 | orchestrator -> localhost | | .o=o*o=o.+ |
2025-06-02 12:33:58.579506 | orchestrator -> localhost | | . OB +.* o|
2025-06-02 12:33:58.579525 | orchestrator -> localhost | | S.++.+.+ |
2025-06-02 12:33:58.579548 | orchestrator -> localhost | | o..... |
2025-06-02 12:33:58.579568 | orchestrator -> localhost | | .o |
2025-06-02 12:33:58.579588 | orchestrator -> localhost | | .. |
2025-06-02 12:33:58.579608 | orchestrator -> localhost | | .. |
2025-06-02 12:33:58.579628 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 12:33:58.579686 | orchestrator -> localhost | ok: Runtime: 0:00:00.967570
2025-06-02 12:33:58.587343 |
2025-06-02 12:33:58.587462 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 12:33:58.607354 | orchestrator | ok
2025-06-02 12:33:58.618192 | orchestrator | included: /var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 12:33:58.628145 |
2025-06-02 12:33:58.628263 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 12:33:58.643118 | orchestrator | skipping: Conditional result was False
2025-06-02 12:33:58.661917 |
2025-06-02 12:33:58.662137 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 12:33:59.283477 | orchestrator | changed
2025-06-02 12:33:59.290137 |
2025-06-02 12:33:59.290249 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 12:33:59.561161 | orchestrator | ok
2025-06-02 12:33:59.570028 |
2025-06-02 12:33:59.570171 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 12:34:00.018565 | orchestrator | ok
2025-06-02 12:34:00.027398 |
2025-06-02 12:34:00.027539 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 12:34:00.454642 | orchestrator | ok
2025-06-02 12:34:00.463931 |
2025-06-02 12:34:00.464126 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 12:34:00.488428 | orchestrator | skipping: Conditional result was False
2025-06-02 12:34:00.496430 |
2025-06-02 12:34:00.496557 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 12:34:00.947573 | orchestrator -> localhost | changed
2025-06-02 12:34:00.966513 |
2025-06-02 12:34:00.966632 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 12:34:01.314268 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/1c910ada1ace424a8673b485a079b076_id_rsa (zuul-build-sshkey)
2025-06-02 12:34:01.314513 | orchestrator -> localhost | ok: Runtime: 0:00:00.019480
2025-06-02 12:34:01.322069 |
2025-06-02 12:34:01.322187 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 12:34:01.743683 | orchestrator | ok
2025-06-02 12:34:01.752552 |
2025-06-02 12:34:01.752683 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 12:34:01.787096 | orchestrator | skipping: Conditional result was False
2025-06-02 12:34:01.854432 |
2025-06-02 12:34:01.854575 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 12:34:02.293818 | orchestrator | ok
2025-06-02 12:34:02.310055 |
2025-06-02 12:34:02.310192 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 12:34:02.339624 | orchestrator | ok
2025-06-02 12:34:02.348553 |
2025-06-02 12:34:02.348669 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 12:34:02.646703 | orchestrator -> localhost | ok
2025-06-02 12:34:02.668817 |
2025-06-02 12:34:02.669136 | TASK [validate-host : Collect information about the host]
2025-06-02 12:34:03.814583 | orchestrator | ok
2025-06-02 12:34:03.835558 |
2025-06-02 12:34:03.835706 | TASK [validate-host : Sanitize hostname]
2025-06-02 12:34:03.894675 | orchestrator | ok
2025-06-02 12:34:03.900268 |
2025-06-02 12:34:03.900382 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 12:34:04.462519 | orchestrator -> localhost | changed
2025-06-02 12:34:04.475912 |
2025-06-02 12:34:04.476152 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 12:34:04.929142 | orchestrator | ok
2025-06-02 12:34:04.936904 |
2025-06-02 12:34:04.937058 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 12:34:05.541101 | orchestrator -> localhost | changed
2025-06-02 12:34:05.574188 |
2025-06-02 12:34:05.574368 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 12:34:05.870445 | orchestrator | ok
2025-06-02 12:34:05.880672 |
2025-06-02 12:34:05.880833 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 12:34:39.688210 | orchestrator | changed:
2025-06-02 12:34:39.688534 | orchestrator | .d..t...... src/
2025-06-02 12:34:39.688592 | orchestrator | .d..t...... src/github.com/
2025-06-02 12:34:39.688633 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 12:34:39.688667 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 12:34:39.688699 | orchestrator | RedHat.yml
2025-06-02 12:34:39.703187 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 12:34:39.703206 | orchestrator | RedHat.yml
2025-06-02 12:34:39.703263 | orchestrator | = 1.53.0"...
2025-06-02 12:34:52.492464 | orchestrator | 12:34:52.492 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-02 12:34:53.824096 | orchestrator | 12:34:53.823 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 12:34:54.822667 | orchestrator | 12:34:54.822 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 12:34:55.805399 | orchestrator | 12:34:55.805 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 12:34:56.706308 | orchestrator | 12:34:56.705 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 12:34:57.964271 | orchestrator | 12:34:57.964 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 12:34:59.357381 | orchestrator | 12:34:59.357 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 12:34:59.357464 | orchestrator | 12:34:59.357 STDOUT terraform: Providers are signed by their developers.
2025-06-02 12:34:59.357483 | orchestrator | 12:34:59.357 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 12:34:59.357489 | orchestrator | 12:34:59.357 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 12:34:59.357494 | orchestrator | 12:34:59.357 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 12:34:59.357567 | orchestrator | 12:34:59.357 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 12:34:59.357605 | orchestrator | 12:34:59.357 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 12:34:59.357631 | orchestrator | 12:34:59.357 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 12:34:59.357969 | orchestrator | 12:34:59.357 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 12:34:59.357984 | orchestrator | 12:34:59.357 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 12:34:59.358008 | orchestrator | 12:34:59.357 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 12:34:59.358032 | orchestrator | 12:34:59.357 STDOUT terraform: should now work.
2025-06-02 12:34:59.358089 | orchestrator | 12:34:59.358 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 12:34:59.358151 | orchestrator | 12:34:59.358 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 12:34:59.358209 | orchestrator | 12:34:59.358 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 12:34:59.552214 | orchestrator | 12:34:59.551 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-06-02 12:34:59.819601 | orchestrator | 12:34:59.819 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 12:34:59.819731 | orchestrator | 12:34:59.819 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 12:34:59.819748 | orchestrator | 12:34:59.819 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 12:34:59.819776 | orchestrator | 12:34:59.819 STDOUT terraform: for this configuration.
2025-06-02 12:35:00.031726 | orchestrator | 12:35:00.031 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-06-02 12:35:00.134501 | orchestrator | 12:35:00.134 STDOUT terraform: ci.auto.tfvars
2025-06-02 12:35:00.138119 | orchestrator | 12:35:00.138 STDOUT terraform: default_custom.tf
2025-06-02 12:35:00.324900 | orchestrator | 12:35:00.324 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
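Note: the "Finding"/"Installing" lines above come from provider version constraints in the testbed's OpenTofu configuration. A minimal sketch of what such a required_providers block could look like follows; only the ">= 2.2.0" bound for hashicorp/local is visible in this log (the earlier `= 1.53.0"...` fragment is truncated), so the other constraints are assumptions.

# Sketch only. Versions actually resolved by this run: local v2.5.3,
# null v3.2.4, openstack v3.1.0, then pinned in .terraform.lock.hcl.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed; the log only shows a truncated fragment
    }
  }
}

Committing the generated .terraform.lock.hcl, as the init output recommends, makes later `tofu init` runs reproduce exactly these provider selections.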
2025-06-02 12:35:01.227472 | orchestrator | 12:35:01.225 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-02 12:35:01.775090 | orchestrator | 12:35:01.774 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 12:35:02.035833 | orchestrator | 12:35:02.035 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 12:35:02.035922 | orchestrator | 12:35:02.035 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 12:35:02.035929 | orchestrator | 12:35:02.035 STDOUT terraform:   + create
2025-06-02 12:35:02.035935 | orchestrator | 12:35:02.035 STDOUT terraform:  <= read (data resources)
2025-06-02 12:35:02.035941 | orchestrator | 12:35:02.035 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 12:35:02.036008 | orchestrator | 12:35:02.035 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-02 12:35:02.036017 | orchestrator | 12:35:02.035 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 12:35:02.036062 | orchestrator | 12:35:02.036 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 12:35:02.036159 | orchestrator | 12:35:02.036 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 12:35:02.036170 | orchestrator | 12:35:02.036 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 12:35:02.036174 | orchestrator | 12:35:02.036 STDOUT terraform:   + file = (known after apply)
2025-06-02 12:35:02.036180 | orchestrator | 12:35:02.036 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.036272 | orchestrator | 12:35:02.036 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.036279 | orchestrator | 12:35:02.036 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 12:35:02.036308 | orchestrator | 12:35:02.036 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 12:35:02.036315 | orchestrator | 12:35:02.036 STDOUT terraform:   + most_recent = true
2025-06-02 12:35:02.036380 | orchestrator | 12:35:02.036 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:02.036393 | orchestrator | 12:35:02.036 STDOUT terraform:   + protected = (known after apply)
2025-06-02 12:35:02.036425 | orchestrator | 12:35:02.036 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.036455 | orchestrator | 12:35:02.036 STDOUT terraform:   + schema = (known after apply)
2025-06-02 12:35:02.036486 | orchestrator | 12:35:02.036 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 12:35:02.036524 | orchestrator | 12:35:02.036 STDOUT terraform:   + tags = (known after apply)
2025-06-02 12:35:02.036563 | orchestrator | 12:35:02.036 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 12:35:02.036575 | orchestrator | 12:35:02.036 STDOUT terraform:   }
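Because the image name refers to a value not yet known, OpenTofu defers the data-source read to apply time, which is why every attribute above is "(known after apply)". A minimal sketch of such a lookup; the variable name is hypothetical, only most_recent = true is confirmed by the plan.

# Sketch of the deferred image lookup. "var.image" is a hypothetical name.
variable "image" {
  type = string
}

data "openstack_images_image_v2" "image" {
  name        = var.image
  most_recent = true # pick the newest image when several share the name
}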
2025-06-02 12:35:02.036662 | orchestrator | 12:35:02.036 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 12:35:02.036667 | orchestrator | 12:35:02.036 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 12:35:02.036740 | orchestrator | 12:35:02.036 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 12:35:02.036750 | orchestrator | 12:35:02.036 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 12:35:02.036756 | orchestrator | 12:35:02.036 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 12:35:02.036831 | orchestrator | 12:35:02.036 STDOUT terraform:   + file = (known after apply)
2025-06-02 12:35:02.036837 | orchestrator | 12:35:02.036 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.036884 | orchestrator | 12:35:02.036 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.036891 | orchestrator | 12:35:02.036 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 12:35:02.036964 | orchestrator | 12:35:02.036 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 12:35:02.036970 | orchestrator | 12:35:02.036 STDOUT terraform:   + most_recent = true
2025-06-02 12:35:02.036975 | orchestrator | 12:35:02.036 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:02.037057 | orchestrator | 12:35:02.036 STDOUT terraform:   + protected = (known after apply)
2025-06-02 12:35:02.037066 | orchestrator | 12:35:02.037 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.037071 | orchestrator | 12:35:02.037 STDOUT terraform:   + schema = (known after apply)
2025-06-02 12:35:02.037106 | orchestrator | 12:35:02.037 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 12:35:02.037166 | orchestrator | 12:35:02.037 STDOUT terraform:   + tags = (known after apply)
2025-06-02 12:35:02.037175 | orchestrator | 12:35:02.037 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 12:35:02.037179 | orchestrator | 12:35:02.037 STDOUT terraform:   }
2025-06-02 12:35:02.037269 | orchestrator | 12:35:02.037 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-02 12:35:02.037399 | orchestrator | 12:35:02.037 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 12:35:02.037405 | orchestrator | 12:35:02.037 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:02.037409 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:02.037421 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:02.037489 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:02.037501 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:02.037576 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:02.037584 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:02.037623 | orchestrator | 12:35:02.037 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:02.037681 | orchestrator | 12:35:02.037 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:02.037691 | orchestrator | 12:35:02.037 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 12:35:02.037806 | orchestrator | 12:35:02.037 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.037812 | orchestrator | 12:35:02.037 STDOUT terraform:   }
2025-06-02 12:35:02.037816 | orchestrator | 12:35:02.037 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-02 12:35:02.037824 | orchestrator | 12:35:02.037 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-02 12:35:02.037830 | orchestrator | 12:35:02.037 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:02.037912 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:02.037923 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:02.037962 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:02.038046 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:02.038060 | orchestrator | 12:35:02.037 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:02.038147 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:02.038153 | orchestrator | 12:35:02.038 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:02.038237 | orchestrator | 12:35:02.038 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:02.038242 | orchestrator | 12:35:02.038 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-06-02 12:35:02.038317 | orchestrator | 12:35:02.038 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.038322 | orchestrator | 12:35:02.038 STDOUT terraform:   }
2025-06-02 12:35:02.038331 | orchestrator | 12:35:02.038 STDOUT terraform:   # local_file.inventory will be created
2025-06-02 12:35:02.038337 | orchestrator | 12:35:02.038 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-02 12:35:02.038383 | orchestrator | 12:35:02.038 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:02.038459 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:02.038466 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:02.038560 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:02.038568 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:02.038579 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:02.038632 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:02.038662 | orchestrator | 12:35:02.038 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:02.038699 | orchestrator | 12:35:02.038 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:02.038730 | orchestrator | 12:35:02.038 STDOUT terraform:   + filename = "inventory.ci"
2025-06-02 12:35:02.038785 | orchestrator | 12:35:02.038 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.038790 | orchestrator | 12:35:02.038 STDOUT terraform:   }
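The local_file resources hand generated artifacts (manager address, public key, inventory) to later job stages, and their filenames all carry the "ci" workspace suffix. A sketch of the inventory resource under that assumption; the content expression and template path are hypothetical, while filename pattern and permissions match the plan.

# Sketch only: content/template are assumed, not taken from the repo.
resource "local_file" "inventory" {
  content = templatefile("${path.module}/templates/inventory.tpl", {
    manager_address = openstack_compute_instance_v2.manager_server.access_ip_v4
  })
  filename        = "inventory.${terraform.workspace}" # renders as inventory.ci here
  file_permission = "0644"
}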
2025-06-02 12:35:02.038844 | orchestrator | 12:35:02.038 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-02 12:35:02.038852 | orchestrator | 12:35:02.038 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-02 12:35:02.038896 | orchestrator | 12:35:02.038 STDOUT terraform:   + content = (sensitive value)
2025-06-02 12:35:02.038927 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:02.038988 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:02.039030 | orchestrator | 12:35:02.038 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:02.039057 | orchestrator | 12:35:02.039 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:02.039149 | orchestrator | 12:35:02.039 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:02.039155 | orchestrator | 12:35:02.039 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:02.039160 | orchestrator | 12:35:02.039 STDOUT terraform:   + directory_permission = "0700"
2025-06-02 12:35:02.039218 | orchestrator | 12:35:02.039 STDOUT terraform:   + file_permission = "0600"
2025-06-02 12:35:02.039226 | orchestrator | 12:35:02.039 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-06-02 12:35:02.039283 | orchestrator | 12:35:02.039 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.039288 | orchestrator | 12:35:02.039 STDOUT terraform:   }
2025-06-02 12:35:02.039321 | orchestrator | 12:35:02.039 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-02 12:35:02.039356 | orchestrator | 12:35:02.039 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-02 12:35:02.039389 | orchestrator | 12:35:02.039 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.039394 | orchestrator | 12:35:02.039 STDOUT terraform:   }
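The private key, by contrast, is a local_sensitive_file with 0600/0700 permissions, which is why the plan prints its content as "(sensitive value)" instead of a hash preview. And null_resource.node_semaphore carries no configuration of its own; a bare null_resource like this is typically an ordering handle that other resources or provisioners depend on. A sketch under that assumption (the depends_on target is a guess):

# Sketch: empty resource used purely as a synchronization point.
resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server] # assumed dependency
}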
2025-06-02 12:35:02.039463 | orchestrator | 12:35:02.039 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 12:35:02.039544 | orchestrator | 12:35:02.039 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 12:35:02.039552 | orchestrator | 12:35:02.039 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.039580 | orchestrator | 12:35:02.039 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.039626 | orchestrator | 12:35:02.039 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.039734 | orchestrator | 12:35:02.039 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.039745 | orchestrator | 12:35:02.039 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.039751 | orchestrator | 12:35:02.039 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-06-02 12:35:02.039813 | orchestrator | 12:35:02.039 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.039821 | orchestrator | 12:35:02.039 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.039852 | orchestrator | 12:35:02.039 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.039885 | orchestrator | 12:35:02.039 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.039891 | orchestrator | 12:35:02.039 STDOUT terraform:   }
2025-06-02 12:35:02.039942 | orchestrator | 12:35:02.039 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 12:35:02.040001 | orchestrator | 12:35:02.039 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.040099 | orchestrator | 12:35:02.039 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.040104 | orchestrator | 12:35:02.040 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.040110 | orchestrator | 12:35:02.040 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.040140 | orchestrator | 12:35:02.040 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.040218 | orchestrator | 12:35:02.040 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.040271 | orchestrator | 12:35:02.040 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-06-02 12:35:02.040356 | orchestrator | 12:35:02.040 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.040362 | orchestrator | 12:35:02.040 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.040366 | orchestrator | 12:35:02.040 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.040372 | orchestrator | 12:35:02.040 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.040395 | orchestrator | 12:35:02.040 STDOUT terraform:   }
2025-06-02 12:35:02.040460 | orchestrator | 12:35:02.040 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 12:35:02.040513 | orchestrator | 12:35:02.040 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.040561 | orchestrator | 12:35:02.040 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.040568 | orchestrator | 12:35:02.040 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.040687 | orchestrator | 12:35:02.040 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.040693 | orchestrator | 12:35:02.040 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.040703 | orchestrator | 12:35:02.040 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.040760 | orchestrator | 12:35:02.040 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-06-02 12:35:02.040870 | orchestrator | 12:35:02.040 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.040876 | orchestrator | 12:35:02.040 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.040880 | orchestrator | 12:35:02.040 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.040886 | orchestrator | 12:35:02.040 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.040891 | orchestrator | 12:35:02.040 STDOUT terraform:   }
2025-06-02 12:35:02.040982 | orchestrator | 12:35:02.040 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 12:35:02.041007 | orchestrator | 12:35:02.040 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.041062 | orchestrator | 12:35:02.041 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.041069 | orchestrator | 12:35:02.041 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.041120 | orchestrator | 12:35:02.041 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.041219 | orchestrator | 12:35:02.041 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.041226 | orchestrator | 12:35:02.041 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.041300 | orchestrator | 12:35:02.041 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-06-02 12:35:02.041308 | orchestrator | 12:35:02.041 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.041334 | orchestrator | 12:35:02.041 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.041364 | orchestrator | 12:35:02.041 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.041389 | orchestrator | 12:35:02.041 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.041396 | orchestrator | 12:35:02.041 STDOUT terraform:   }
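Six identically sized base volumes that differ only in the index embedded in their names point to a counted resource. A sketch, with the count source and the image reference assumed (image_id is "(known after apply)" in the plan, consistent with a deferred image lookup):

# Sketch of the counted base volumes. Size, type, AZ and naming match
# the plan; count = 6 and the image reference are assumptions.
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6
  name              = "testbed-volume-${count.index}-node-base"
  size              = 80
  volume_type       = "ssd"
  availability_zone = "nova"
  image_id          = data.openstack_images_image_v2.image_node.id # assumed
}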
2025-06-02 12:35:02.041457 | orchestrator | 12:35:02.041 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 12:35:02.041518 | orchestrator | 12:35:02.041 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.041575 | orchestrator | 12:35:02.041 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.041580 | orchestrator | 12:35:02.041 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.041644 | orchestrator | 12:35:02.041 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.041652 | orchestrator | 12:35:02.041 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.041702 | orchestrator | 12:35:02.041 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.041753 | orchestrator | 12:35:02.041 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-06-02 12:35:02.041843 | orchestrator | 12:35:02.041 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.041857 | orchestrator | 12:35:02.041 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.041861 | orchestrator | 12:35:02.041 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.041867 | orchestrator | 12:35:02.041 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.041872 | orchestrator | 12:35:02.041 STDOUT terraform:   }
2025-06-02 12:35:02.041937 | orchestrator | 12:35:02.041 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 12:35:02.042053 | orchestrator | 12:35:02.041 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.042070 | orchestrator | 12:35:02.041 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.042076 | orchestrator | 12:35:02.042 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.042538 | orchestrator | 12:35:02.042 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.042617 | orchestrator | 12:35:02.042 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.042622 | orchestrator | 12:35:02.042 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.042731 | orchestrator | 12:35:02.042 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-06-02 12:35:02.042737 | orchestrator | 12:35:02.042 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.042741 | orchestrator | 12:35:02.042 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.042758 | orchestrator | 12:35:02.042 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.042838 | orchestrator | 12:35:02.042 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.042849 | orchestrator | 12:35:02.042 STDOUT terraform:   }
2025-06-02 12:35:02.042889 | orchestrator | 12:35:02.042 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 12:35:02.042942 | orchestrator | 12:35:02.042 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:02.043051 | orchestrator | 12:35:02.042 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.043056 | orchestrator | 12:35:02.042 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.043060 | orchestrator | 12:35:02.042 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.043091 | orchestrator | 12:35:02.043 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.043132 | orchestrator | 12:35:02.043 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.043242 | orchestrator | 12:35:02.043 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-06-02 12:35:02.043332 | orchestrator | 12:35:02.043 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.043479 | orchestrator | 12:35:02.043 STDOUT terraform:   + size = 80
2025-06-02 12:35:02.043511 | orchestrator | 12:35:02.043 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.043551 | orchestrator | 12:35:02.043 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.043562 | orchestrator | 12:35:02.043 STDOUT terraform:   }
2025-06-02 12:35:02.043610 | orchestrator | 12:35:02.043 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 12:35:02.043666 | orchestrator | 12:35:02.043 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.043712 | orchestrator | 12:35:02.043 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.043722 | orchestrator | 12:35:02.043 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.043857 | orchestrator | 12:35:02.043 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.043862 | orchestrator | 12:35:02.043 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.043868 | orchestrator | 12:35:02.043 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-06-02 12:35:02.043924 | orchestrator | 12:35:02.043 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.043931 | orchestrator | 12:35:02.043 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.044002 | orchestrator | 12:35:02.043 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.044007 | orchestrator | 12:35:02.043 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.044016 | orchestrator | 12:35:02.043 STDOUT terraform:   }
2025-06-02 12:35:02.044050 | orchestrator | 12:35:02.043 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-06-02 12:35:02.044108 | orchestrator | 12:35:02.044 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.044153 | orchestrator | 12:35:02.044 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.044160 | orchestrator | 12:35:02.044 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.044223 | orchestrator | 12:35:02.044 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.044266 | orchestrator | 12:35:02.044 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.044311 | orchestrator | 12:35:02.044 STDOUT terraform:   + name = "testbed-volume-1-node-4"
2025-06-02 12:35:02.044398 | orchestrator | 12:35:02.044 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.044404 | orchestrator | 12:35:02.044 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.044410 | orchestrator | 12:35:02.044 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.044433 | orchestrator | 12:35:02.044 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.044440 | orchestrator | 12:35:02.044 STDOUT terraform:   }
2025-06-02 12:35:02.044506 | orchestrator | 12:35:02.044 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-06-02 12:35:02.044561 | orchestrator | 12:35:02.044 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.044592 | orchestrator | 12:35:02.044 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.044624 | orchestrator | 12:35:02.044 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.044677 | orchestrator | 12:35:02.044 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.044822 | orchestrator | 12:35:02.044 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.044828 | orchestrator | 12:35:02.044 STDOUT terraform:   + name = "testbed-volume-2-node-5"
2025-06-02 12:35:02.044832 | orchestrator | 12:35:02.044 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.044835 | orchestrator | 12:35:02.044 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.044841 | orchestrator | 12:35:02.044 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.044847 | orchestrator | 12:35:02.044 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.044870 | orchestrator | 12:35:02.044 STDOUT terraform:   }
2025-06-02 12:35:02.044953 | orchestrator | 12:35:02.044 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-06-02 12:35:02.045008 | orchestrator | 12:35:02.044 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.045016 | orchestrator | 12:35:02.044 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.045046 | orchestrator | 12:35:02.045 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.045109 | orchestrator | 12:35:02.045 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.045231 | orchestrator | 12:35:02.045 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.045237 | orchestrator | 12:35:02.045 STDOUT terraform:   + name = "testbed-volume-3-node-3"
2025-06-02 12:35:02.045241 | orchestrator | 12:35:02.045 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.045246 | orchestrator | 12:35:02.045 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.045300 | orchestrator | 12:35:02.045 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.045305 | orchestrator | 12:35:02.045 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.045311 | orchestrator | 12:35:02.045 STDOUT terraform:   }
2025-06-02 12:35:02.045355 | orchestrator | 12:35:02.045 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-06-02 12:35:02.045452 | orchestrator | 12:35:02.045 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.045462 | orchestrator | 12:35:02.045 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.045468 | orchestrator | 12:35:02.045 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.045520 | orchestrator | 12:35:02.045 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.045564 | orchestrator | 12:35:02.045 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.045623 | orchestrator | 12:35:02.045 STDOUT terraform:   + name = "testbed-volume-4-node-4"
2025-06-02 12:35:02.045635 | orchestrator | 12:35:02.045 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.045667 | orchestrator | 12:35:02.045 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.045700 | orchestrator | 12:35:02.045 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.045713 | orchestrator | 12:35:02.045 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.045724 | orchestrator | 12:35:02.045 STDOUT terraform:   }
2025-06-02 12:35:02.045824 | orchestrator | 12:35:02.045 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-06-02 12:35:02.045831 | orchestrator | 12:35:02.045 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.045876 | orchestrator | 12:35:02.045 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.045934 | orchestrator | 12:35:02.045 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.045941 | orchestrator | 12:35:02.045 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.046037 | orchestrator | 12:35:02.045 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.046050 | orchestrator | 12:35:02.045 STDOUT terraform:   + name = "testbed-volume-5-node-5"
2025-06-02 12:35:02.046123 | orchestrator | 12:35:02.046 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.046129 | orchestrator | 12:35:02.046 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.046134 | orchestrator | 12:35:02.046 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.046187 | orchestrator | 12:35:02.046 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.046207 | orchestrator | 12:35:02.046 STDOUT terraform:   }
2025-06-02 12:35:02.046373 | orchestrator | 12:35:02.046 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-06-02 12:35:02.046378 | orchestrator | 12:35:02.046 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.046382 | orchestrator | 12:35:02.046 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.046386 | orchestrator | 12:35:02.046 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.046391 | orchestrator | 12:35:02.046 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.046442 | orchestrator | 12:35:02.046 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.046493 | orchestrator | 12:35:02.046 STDOUT terraform:   + name = "testbed-volume-6-node-3"
2025-06-02 12:35:02.046578 | orchestrator | 12:35:02.046 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.046584 | orchestrator | 12:35:02.046 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.046591 | orchestrator | 12:35:02.046 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.046597 | orchestrator | 12:35:02.046 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.046601 | orchestrator | 12:35:02.046 STDOUT terraform:   }
2025-06-02 12:35:02.046659 | orchestrator | 12:35:02.046 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-06-02 12:35:02.046711 | orchestrator | 12:35:02.046 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.046758 | orchestrator | 12:35:02.046 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.046766 | orchestrator | 12:35:02.046 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.046818 | orchestrator | 12:35:02.046 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.046866 | orchestrator | 12:35:02.046 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.047020 | orchestrator | 12:35:02.046 STDOUT terraform:   + name = "testbed-volume-7-node-4"
2025-06-02 12:35:02.047662 | orchestrator | 12:35:02.046 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.047668 | orchestrator | 12:35:02.047 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.047672 | orchestrator | 12:35:02.047 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.047678 | orchestrator | 12:35:02.047 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.047707 | orchestrator | 12:35:02.047 STDOUT terraform:   }
2025-06-02 12:35:02.047762 | orchestrator | 12:35:02.047 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-06-02 12:35:02.047813 | orchestrator | 12:35:02.047 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:02.047845 | orchestrator | 12:35:02.047 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:02.047912 | orchestrator | 12:35:02.047 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.047922 | orchestrator | 12:35:02.047 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.047977 | orchestrator | 12:35:02.047 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:02.048139 | orchestrator | 12:35:02.047 STDOUT terraform:   + name = "testbed-volume-8-node-5"
2025-06-02 12:35:02.048183 | orchestrator | 12:35:02.048 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.048221 | orchestrator | 12:35:02.048 STDOUT terraform:   + size = 20
2025-06-02 12:35:02.048297 | orchestrator | 12:35:02.048 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:02.048302 | orchestrator | 12:35:02.048 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:02.048306 | orchestrator | 12:35:02.048 STDOUT terraform:   }
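The nine 20 GB data volumes cycle their suffix through node-3, node-4, node-5, which suggests the name is derived from count.index with a modulo. A sketch of one way the configuration might express this; whether the real config computes it like this is an assumption, only the resulting names are confirmed by the plan.

# Sketch: count.index % 3 + 3 yields 3, 4, 5, 3, 4, 5, ... matching the
# testbed-volume-<i>-node-<n> names shown above.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${count.index % 3 + 3}"
  size              = 20
  volume_type       = "ssd"
  availability_zone = "nova"
}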
2025-06-02 12:35:02.048434 | orchestrator | 12:35:02.048 STDOUT terraform:   # openstack_compute_instance_v2.manager_server will be created
2025-06-02 12:35:02.048440 | orchestrator | 12:35:02.048 STDOUT terraform:   + resource "openstack_compute_instance_v2" "manager_server" {
2025-06-02 12:35:02.048444 | orchestrator | 12:35:02.048 STDOUT terraform:   + access_ip_v4 = (known after apply)
2025-06-02 12:35:02.048449 | orchestrator | 12:35:02.048 STDOUT terraform:   + access_ip_v6 = (known after apply)
2025-06-02 12:35:02.048497 | orchestrator | 12:35:02.048 STDOUT terraform:   + all_metadata = (known after apply)
2025-06-02 12:35:02.048554 | orchestrator | 12:35:02.048 STDOUT terraform:   + all_tags = (known after apply)
2025-06-02 12:35:02.048561 | orchestrator | 12:35:02.048 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.048592 | orchestrator | 12:35:02.048 STDOUT terraform:   + config_drive = true
2025-06-02 12:35:02.048628 | orchestrator | 12:35:02.048 STDOUT terraform:   + created = (known after apply)
2025-06-02 12:35:02.048764 | orchestrator | 12:35:02.048 STDOUT terraform:   + flavor_id = (known after apply)
2025-06-02 12:35:02.048775 | orchestrator | 12:35:02.048 STDOUT terraform:   + flavor_name = "OSISM-4V-16"
2025-06-02 12:35:02.048779 | orchestrator | 12:35:02.048 STDOUT terraform:   + force_delete = false
2025-06-02 12:35:02.048783 | orchestrator | 12:35:02.048 STDOUT terraform:   + hypervisor_hostname = (known after apply)
2025-06-02 12:35:02.048788 | orchestrator | 12:35:02.048 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.048843 | orchestrator | 12:35:02.048 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.048883 | orchestrator | 12:35:02.048 STDOUT terraform:   + image_name = (known after apply)
2025-06-02 12:35:02.048971 | orchestrator | 12:35:02.048 STDOUT terraform:   + key_pair = "testbed"
2025-06-02 12:35:02.048976 | orchestrator | 12:35:02.048 STDOUT terraform:   + name = "testbed-manager"
2025-06-02 12:35:02.048980 | orchestrator | 12:35:02.048 STDOUT terraform:   + power_state = "active"
2025-06-02 12:35:02.049018 | orchestrator | 12:35:02.048 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.049078 | orchestrator | 12:35:02.049 STDOUT terraform:   + security_groups = (known after apply)
2025-06-02 12:35:02.049084 | orchestrator | 12:35:02.049 STDOUT terraform:   + stop_before_destroy = false
2025-06-02 12:35:02.049120 | orchestrator | 12:35:02.049 STDOUT terraform:   + updated = (known after apply)
2025-06-02 12:35:02.049208 | orchestrator | 12:35:02.049 STDOUT terraform:   + user_data = (known after apply)
2025-06-02 12:35:02.049214 | orchestrator | 12:35:02.049 STDOUT terraform:   + block_device {
2025-06-02 12:35:02.049218 | orchestrator | 12:35:02.049 STDOUT terraform:   + boot_index = 0
2025-06-02 12:35:02.049239 | orchestrator | 12:35:02.049 STDOUT terraform:   + delete_on_termination = false
2025-06-02 12:35:02.049299 | orchestrator | 12:35:02.049 STDOUT terraform:   + destination_type = "volume"
2025-06-02 12:35:02.049306 | orchestrator | 12:35:02.049 STDOUT terraform:   + multiattach = false
2025-06-02 12:35:02.049343 | orchestrator | 12:35:02.049 STDOUT terraform:   + source_type = "volume"
2025-06-02 12:35:02.049400 | orchestrator | 12:35:02.049 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.049406 | orchestrator | 12:35:02.049 STDOUT terraform:   }
2025-06-02 12:35:02.049413 | orchestrator | 12:35:02.049 STDOUT terraform:   + network {
2025-06-02 12:35:02.049439 | orchestrator | 12:35:02.049 STDOUT terraform:   + access_network = false
2025-06-02 12:35:02.049470 | orchestrator | 12:35:02.049 STDOUT terraform:   + fixed_ip_v4 = (known after apply)
2025-06-02 12:35:02.049519 | orchestrator | 12:35:02.049 STDOUT terraform:   + fixed_ip_v6 = (known after apply)
2025-06-02 12:35:02.049553 | orchestrator | 12:35:02.049 STDOUT terraform:   + mac = (known after apply)
2025-06-02 12:35:02.049755 | orchestrator | 12:35:02.049 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:02.049760 | orchestrator | 12:35:02.049 STDOUT terraform:   + port = (known after apply)
2025-06-02 12:35:02.049764 | orchestrator | 12:35:02.049 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.049772 | orchestrator | 12:35:02.049 STDOUT terraform:   }
2025-06-02 12:35:02.049777 | orchestrator | 12:35:02.049 STDOUT terraform:   }
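The manager boots from a pre-built volume rather than directly from an image: source_type and destination_type are both "volume" and the uuid is only known after apply, which fits a reference to the base volume created above. A sketch of the instance; the block_device uuid and the network port reference are assumptions, while flavor, key pair, AZ and the boot-from-volume settings match the plan.

# Sketch of the boot-from-volume manager instance.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume[0].id # assumed
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.manager_port.id # hypothetical port resource
  }
}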
2025-06-02 12:35:02.049781 | orchestrator | 12:35:02.049 STDOUT terraform:   # openstack_compute_instance_v2.node_server[0] will be created
2025-06-02 12:35:02.049787 | orchestrator | 12:35:02.049 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-02 12:35:02.049793 | orchestrator | 12:35:02.049 STDOUT terraform:   + access_ip_v4 = (known after apply)
2025-06-02 12:35:02.049897 | orchestrator | 12:35:02.049 STDOUT terraform:   + access_ip_v6 = (known after apply)
2025-06-02 12:35:02.049902 | orchestrator | 12:35:02.049 STDOUT terraform:   + all_metadata = (known after apply)
2025-06-02 12:35:02.049908 | orchestrator | 12:35:02.049 STDOUT terraform:   + all_tags = (known after apply)
2025-06-02 12:35:02.049946 | orchestrator | 12:35:02.049 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.049957 | orchestrator | 12:35:02.049 STDOUT terraform:   + config_drive = true
2025-06-02 12:35:02.050033 | orchestrator | 12:35:02.049 STDOUT terraform:   + created = (known after apply)
2025-06-02 12:35:02.050091 | orchestrator | 12:35:02.049 STDOUT terraform:   + flavor_id = (known after apply)
2025-06-02 12:35:02.050100 | orchestrator | 12:35:02.050 STDOUT terraform:   + flavor_name = "OSISM-8V-32"
2025-06-02 12:35:02.050138 | orchestrator | 12:35:02.050 STDOUT terraform:   + force_delete = false
2025-06-02 12:35:02.050171 | orchestrator | 12:35:02.050 STDOUT terraform:   + hypervisor_hostname = (known after apply)
2025-06-02 12:35:02.050270 | orchestrator | 12:35:02.050 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.050278 | orchestrator | 12:35:02.050 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.050352 | orchestrator | 12:35:02.050 STDOUT terraform:   + image_name = (known after apply)
2025-06-02 12:35:02.050361 | orchestrator | 12:35:02.050 STDOUT terraform:   + key_pair = "testbed"
2025-06-02 12:35:02.050391 | orchestrator | 12:35:02.050 STDOUT terraform:   + name = "testbed-node-0"
2025-06-02 12:35:02.050478 | orchestrator | 12:35:02.050 STDOUT terraform:   + power_state = "active"
2025-06-02 12:35:02.050494 | orchestrator | 12:35:02.050 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.050499 | orchestrator | 12:35:02.050 STDOUT terraform:   + security_groups = (known after apply)
2025-06-02 12:35:02.050505 | orchestrator | 12:35:02.050 STDOUT terraform:   + stop_before_destroy = false
2025-06-02 12:35:02.050641 | orchestrator | 12:35:02.050 STDOUT terraform:   + updated = (known after apply)
2025-06-02 12:35:02.050647 | orchestrator | 12:35:02.050 STDOUT terraform:   + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-02 12:35:02.050651 | orchestrator | 12:35:02.050 STDOUT terraform:   + block_device {
2025-06-02 12:35:02.050657 | orchestrator | 12:35:02.050 STDOUT terraform:   + boot_index = 0
2025-06-02 12:35:02.050696 | orchestrator | 12:35:02.050 STDOUT terraform:   + delete_on_termination = false
2025-06-02 12:35:02.050751 | orchestrator | 12:35:02.050 STDOUT terraform:   + destination_type = "volume"
2025-06-02 12:35:02.050766 | orchestrator | 12:35:02.050 STDOUT terraform:   + multiattach = false
2025-06-02 12:35:02.050881 | orchestrator | 12:35:02.050 STDOUT terraform:   + source_type = "volume"
2025-06-02 12:35:02.050892 | orchestrator | 12:35:02.050 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.050896 | orchestrator | 12:35:02.050 STDOUT terraform:   }
2025-06-02 12:35:02.050901 | orchestrator | 12:35:02.050 STDOUT terraform:   + network {
2025-06-02 12:35:02.050905 | orchestrator | 12:35:02.050 STDOUT terraform:   + access_network = false
2025-06-02 12:35:02.050964 | orchestrator | 12:35:02.050 STDOUT terraform:   + fixed_ip_v4 = (known after apply)
2025-06-02 12:35:02.050970 | orchestrator | 12:35:02.050 STDOUT terraform:   + fixed_ip_v6 = (known after apply)
2025-06-02 12:35:02.051024 | orchestrator | 12:35:02.050 STDOUT terraform:   + mac = (known after apply)
2025-06-02 12:35:02.051031 | orchestrator | 12:35:02.050 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:02.051102 | orchestrator | 12:35:02.051 STDOUT terraform:   + port = (known after apply)
2025-06-02 12:35:02.051108 | orchestrator | 12:35:02.051 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.051113 | orchestrator | 12:35:02.051 STDOUT terraform:   }
2025-06-02 12:35:02.051119 | orchestrator | 12:35:02.051 STDOUT terraform:   }
2025-06-02 12:35:02.051188 | orchestrator | 12:35:02.051 STDOUT terraform:   # openstack_compute_instance_v2.node_server[1] will be created
2025-06-02 12:35:02.051283 | orchestrator | 12:35:02.051 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-02 12:35:02.051291 | orchestrator | 12:35:02.051 STDOUT terraform:   + access_ip_v4 = (known after apply)
2025-06-02 12:35:02.051345 | orchestrator | 12:35:02.051 STDOUT terraform:   + access_ip_v6 = (known after apply)
2025-06-02 12:35:02.051381 | orchestrator | 12:35:02.051 STDOUT terraform:   + all_metadata = (known after apply)
2025-06-02 12:35:02.051472 | orchestrator | 12:35:02.051 STDOUT terraform:   + all_tags = (known after apply)
2025-06-02 12:35:02.051478 | orchestrator | 12:35:02.051 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.051482 | orchestrator | 12:35:02.051 STDOUT terraform:   + config_drive = true
2025-06-02 12:35:02.051510 | orchestrator | 12:35:02.051 STDOUT terraform:   + created = (known after apply)
2025-06-02 12:35:02.051569 | orchestrator | 12:35:02.051 STDOUT terraform:   + flavor_id = (known after apply)
2025-06-02 12:35:02.051576 | orchestrator | 12:35:02.051 STDOUT terraform:   + flavor_name = "OSISM-8V-32"
2025-06-02 12:35:02.051607 | orchestrator | 12:35:02.051 STDOUT terraform:   + force_delete = false
2025-06-02 12:35:02.051647 | orchestrator | 12:35:02.051 STDOUT terraform:   + hypervisor_hostname = (known after apply)
2025-06-02 12:35:02.051743 | orchestrator | 12:35:02.051 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.051748 | orchestrator | 12:35:02.051 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.051754 | orchestrator | 12:35:02.051 STDOUT terraform:   + image_name = (known after apply)
2025-06-02 12:35:02.051830 | orchestrator | 12:35:02.051 STDOUT terraform:   + key_pair = "testbed"
2025-06-02 12:35:02.051840 | orchestrator | 12:35:02.051 STDOUT terraform:   + name = "testbed-node-1"
2025-06-02 12:35:02.051846 | orchestrator | 12:35:02.051 STDOUT terraform:   + power_state = "active"
2025-06-02 12:35:02.051931 | orchestrator | 12:35:02.051 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.051936 | orchestrator | 12:35:02.051 STDOUT terraform:   + security_groups = (known after apply)
2025-06-02 12:35:02.051942 | orchestrator | 12:35:02.051 STDOUT terraform:   + stop_before_destroy = false
2025-06-02 12:35:02.051992 | orchestrator | 12:35:02.051 STDOUT terraform:   + updated = (known after apply)
2025-06-02 12:35:02.052056 | orchestrator | 12:35:02.051 STDOUT terraform:   + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-02 12:35:02.052064 | orchestrator | 12:35:02.052 STDOUT terraform:   + block_device {
2025-06-02 12:35:02.052108 | orchestrator | 12:35:02.052 STDOUT terraform:   + boot_index = 0
2025-06-02 12:35:02.052132 | orchestrator | 12:35:02.052 STDOUT terraform:   + delete_on_termination = false
2025-06-02 12:35:02.052170 | orchestrator | 12:35:02.052 STDOUT terraform:   + destination_type = "volume"
2025-06-02 12:35:02.052221 | orchestrator | 12:35:02.052 STDOUT terraform:   + multiattach = false
2025-06-02 12:35:02.052284 | orchestrator | 12:35:02.052 STDOUT terraform:   + source_type = "volume"
2025-06-02 12:35:02.052292 | orchestrator | 12:35:02.052 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.052299 | orchestrator | 12:35:02.052 STDOUT terraform:   }
2025-06-02 12:35:02.052322 | orchestrator | 12:35:02.052 STDOUT terraform:   + network {
2025-06-02 12:35:02.052352 | orchestrator | 12:35:02.052 STDOUT terraform:   + access_network = false
2025-06-02 12:35:02.052405 | orchestrator | 12:35:02.052 STDOUT terraform:   + fixed_ip_v4 = (known after apply)
2025-06-02 12:35:02.052413 | orchestrator | 12:35:02.052 STDOUT terraform:   + fixed_ip_v6 = (known after apply)
2025-06-02 12:35:02.052459 | orchestrator | 12:35:02.052 STDOUT terraform:   + mac = (known after apply)
2025-06-02 12:35:02.052489 | orchestrator | 12:35:02.052 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:02.052531 | orchestrator | 12:35:02.052 STDOUT terraform:   + port = (known after apply)
2025-06-02 12:35:02.052607 | orchestrator | 12:35:02.052 STDOUT terraform:   + uuid = (known after apply)
2025-06-02 12:35:02.052612 | orchestrator | 12:35:02.052 STDOUT terraform:   }
2025-06-02 12:35:02.052616 | orchestrator | 12:35:02.052 STDOUT terraform:   }
2025-06-02 12:35:02.052704 | orchestrator | 12:35:02.052 STDOUT terraform:   # openstack_compute_instance_v2.node_server[2] will be created
2025-06-02 12:35:02.052709 | orchestrator | 12:35:02.052 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-06-02 12:35:02.052715 | orchestrator | 12:35:02.052 STDOUT terraform:   + access_ip_v4 = (known after apply)
2025-06-02 12:35:02.052761 | orchestrator | 12:35:02.052 STDOUT terraform:   + access_ip_v6 = (known after apply)
2025-06-02 12:35:02.052824 | orchestrator | 12:35:02.052 STDOUT terraform:   + all_metadata = (known after apply)
2025-06-02 12:35:02.052832 | orchestrator | 12:35:02.052 STDOUT terraform:   + all_tags = (known after apply)
2025-06-02 12:35:02.052879 | orchestrator | 12:35:02.052 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:02.052886 | orchestrator | 12:35:02.052 STDOUT terraform:   + config_drive = true
2025-06-02 12:35:02.052932 | orchestrator | 12:35:02.052 STDOUT terraform:   + created = (known after apply)
2025-06-02 12:35:02.052970 | orchestrator | 12:35:02.052 STDOUT terraform:   + flavor_id = (known after apply)
2025-06-02 12:35:02.053001 | orchestrator | 12:35:02.052 STDOUT terraform:   + flavor_name = "OSISM-8V-32"
2025-06-02 12:35:02.053039 | orchestrator | 12:35:02.052 STDOUT terraform:   + force_delete = false
2025-06-02 12:35:02.053164 | orchestrator | 12:35:02.053 STDOUT terraform:   + hypervisor_hostname = (known after apply)
2025-06-02 12:35:02.053170 | orchestrator | 12:35:02.053 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:02.053174 | orchestrator | 12:35:02.053 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:02.053180 | orchestrator | 12:35:02.053 STDOUT terraform:   + image_name = (known after apply)
2025-06-02 12:35:02.053234 | orchestrator | 12:35:02.053 STDOUT terraform:   + key_pair = "testbed"
2025-06-02 12:35:02.053258 | orchestrator | 12:35:02.053 STDOUT terraform:   + name = "testbed-node-2"
2025-06-02 12:35:02.053291 | orchestrator | 12:35:02.053 STDOUT terraform:   + power_state = "active"
2025-06-02 12:35:02.053364 | orchestrator | 12:35:02.053 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:02.053376 | orchestrator | 12:35:02.053 STDOUT terraform:   + security_groups = (known after apply)
2025-06-02 12:35:02.053405 | orchestrator | 12:35:02.053 STDOUT terraform:   + stop_before_destroy = false
2025-06-02 12:35:02.053453 | orchestrator | 12:35:02.053 STDOUT terraform:   + updated = (known after apply)
2025-06-02 12:35:02.053508 | orchestrator | 12:35:02.053 STDOUT terraform:   + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-02 12:35:02.053515 | orchestrator | 12:35:02.053 STDOUT terraform:   + block_device {
2025-06-02 12:35:02.053574 | orchestrator | 12:35:02.053 STDOUT terraform:   + boot_index = 0
2025-06-02 12:35:02.053584 | orchestrator | 12:35:02.053 STDOUT terraform:   + delete_on_termination = false
2025-06-02 12:35:02.053607 | orchestrator | 12:35:02.053 STDOUT terraform:   + destination_type = "volume"
2025-06-02 12:35:02.053647 | orchestrator | 12:35:02.053 STDOUT terraform:   +
multiattach = false 2025-06-02 12:35:02.053676 | orchestrator | 12:35:02.053 STDOUT terraform:  + source_type = "volume" 2025-06-02 12:35:02.053753 | orchestrator | 12:35:02.053 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:02.053758 | orchestrator | 12:35:02.053 STDOUT terraform:  } 2025-06-02 12:35:02.053762 | orchestrator | 12:35:02.053 STDOUT terraform:  + network { 2025-06-02 12:35:02.053768 | orchestrator | 12:35:02.053 STDOUT terraform:  + access_network = false 2025-06-02 12:35:02.053829 | orchestrator | 12:35:02.053 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 12:35:02.053836 | orchestrator | 12:35:02.053 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 12:35:02.053909 | orchestrator | 12:35:02.053 STDOUT terraform:  + mac = (known after apply) 2025-06-02 12:35:02.053920 | orchestrator | 12:35:02.053 STDOUT terraform:  + name = (known after apply) 2025-06-02 12:35:02.053948 | orchestrator | 12:35:02.053 STDOUT terraform:  + port = (known after apply) 2025-06-02 12:35:02.054069 | orchestrator | 12:35:02.053 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:02.054076 | orchestrator | 12:35:02.053 STDOUT terraform:  } 2025-06-02 12:35:02.054081 | orchestrator | 12:35:02.053 STDOUT terraform:  } 2025-06-02 12:35:02.054169 | orchestrator | 12:35:02.054 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-02 12:35:02.054177 | orchestrator | 12:35:02.054 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 12:35:02.054229 | orchestrator | 12:35:02.054 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 12:35:02.054271 | orchestrator | 12:35:02.054 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 12:35:02.054323 | orchestrator | 12:35:02.054 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 12:35:02.054364 | orchestrator | 12:35:02.054 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:02.054371 | orchestrator | 12:35:02.054 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:02.054433 | orchestrator | 12:35:02.054 STDOUT terraform:  + config_drive = true 2025-06-02 12:35:02.054444 | orchestrator | 12:35:02.054 STDOUT terraform:  + created = (known after apply) 2025-06-02 12:35:02.054493 | orchestrator | 12:35:02.054 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 12:35:02.054636 | orchestrator | 12:35:02.054 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 12:35:02.054646 | orchestrator | 12:35:02.054 STDOUT terraform:  + force_delete = false 2025-06-02 12:35:02.054650 | orchestrator | 12:35:02.054 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 12:35:02.054654 | orchestrator | 12:35:02.054 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.054659 | orchestrator | 12:35:02.054 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 12:35:02.054689 | orchestrator | 12:35:02.054 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 12:35:02.054718 | orchestrator | 12:35:02.054 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 12:35:02.054856 | orchestrator | 12:35:02.054 STDOUT terraform:  + name = "testbed-node-3" 2025-06-02 12:35:02.054861 | orchestrator | 12:35:02.054 STDOUT terraform:  + power_state = "active" 2025-06-02 12:35:02.054865 | orchestrator | 12:35:02.054 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.054869 | orchestrator | 12:35:02.054 
  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
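Aside: the six identical node_server blocks in this plan (index [0] appears earlier in the log) are the usual signature of a single count-expanded resource. A minimal HCL sketch that would plan this way; the boot-volume resource name and the user-data file are assumptions, not taken from the testbed repository:

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                               # plan shows node_server[0]..[5]
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")           # the plan prints only a hash of this

  # boot from a pre-created volume; "node_boot_volume" is an assumed resource name
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_boot_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # attach the matching management port planned further below
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}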
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
"openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.059441 | orchestrator | 12:35:02.059 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.059478 | orchestrator | 12:35:02.059 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.059513 | orchestrator | 12:35:02.059 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.059525 | orchestrator | 12:35:02.059 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.059710 | orchestrator | 12:35:02.059 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.059716 | orchestrator | 12:35:02.059 STDOUT terraform:  } 2025-06-02 12:35:02.059720 | orchestrator | 12:35:02.059 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-02 12:35:02.059724 | orchestrator | 12:35:02.059 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.059728 | orchestrator | 12:35:02.059 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.059733 | orchestrator | 12:35:02.059 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.059782 | orchestrator | 12:35:02.059 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.059813 | orchestrator | 12:35:02.059 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.059852 | orchestrator | 12:35:02.059 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.059858 | orchestrator | 12:35:02.059 STDOUT terraform:  } 2025-06-02 12:35:02.059915 | orchestrator | 12:35:02.059 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-02 12:35:02.059979 | orchestrator | 12:35:02.059 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.059987 | orchestrator | 12:35:02.059 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.060055 | orchestrator | 12:35:02.059 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.060065 | orchestrator | 12:35:02.060 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.060124 | orchestrator | 12:35:02.060 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.060133 | orchestrator | 12:35:02.060 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.060139 | orchestrator | 12:35:02.060 STDOUT terraform:  } 2025-06-02 12:35:02.060225 | orchestrator | 12:35:02.060 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-02 12:35:02.060270 | orchestrator | 12:35:02.060 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.060278 | orchestrator | 12:35:02.060 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.060334 | orchestrator | 12:35:02.060 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.060342 | orchestrator | 12:35:02.060 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.060471 | orchestrator | 12:35:02.060 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.060482 | orchestrator | 12:35:02.060 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.060485 | orchestrator | 12:35:02.060 STDOUT terraform:  } 2025-06-02 12:35:02.060489 | orchestrator | 12:35:02.060 STDOUT terraform:  # 
openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-02 12:35:02.060558 | orchestrator | 12:35:02.060 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.060653 | orchestrator | 12:35:02.060 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.060659 | orchestrator | 12:35:02.060 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.060663 | orchestrator | 12:35:02.060 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.060668 | orchestrator | 12:35:02.060 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.060711 | orchestrator | 12:35:02.060 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.060723 | orchestrator | 12:35:02.060 STDOUT terraform:  } 2025-06-02 12:35:02.060784 | orchestrator | 12:35:02.060 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-02 12:35:02.060872 | orchestrator | 12:35:02.060 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.060877 | orchestrator | 12:35:02.060 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.060906 | orchestrator | 12:35:02.060 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.060941 | orchestrator | 12:35:02.060 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.061061 | orchestrator | 12:35:02.060 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.061070 | orchestrator | 12:35:02.060 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.061077 | orchestrator | 12:35:02.061 STDOUT terraform:  } 2025-06-02 12:35:02.061138 | orchestrator | 12:35:02.061 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-02 12:35:02.061180 | orchestrator | 12:35:02.061 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-02 12:35:02.061188 | orchestrator | 12:35:02.061 STDOUT terraform:  + device = (known after apply) 2025-06-02 12:35:02.061248 | orchestrator | 12:35:02.061 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.061275 | orchestrator | 12:35:02.061 STDOUT terraform:  + instance_id = (known after apply) 2025-06-02 12:35:02.061388 | orchestrator | 12:35:02.061 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.061394 | orchestrator | 12:35:02.061 STDOUT terraform:  + volume_id = (known after apply) 2025-06-02 12:35:02.061397 | orchestrator | 12:35:02.061 STDOUT terraform:  } 2025-06-02 12:35:02.061421 | orchestrator | 12:35:02.061 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-02 12:35:02.061511 | orchestrator | 12:35:02.061 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-02 12:35:02.061519 | orchestrator | 12:35:02.061 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-02 12:35:02.061574 | orchestrator | 12:35:02.061 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-02 12:35:02.061581 | orchestrator | 12:35:02.061 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.061644 | orchestrator | 12:35:02.061 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 12:35:02.061652 | orchestrator | 12:35:02.061 STDOUT terraform:  + region = 
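Aside: nine node_volume_attachment resources against six instances suggests extra data volumes spread across the nodes, but the exact mapping is not visible in the plan. A hedged sketch of the attachment and the floating-IP association planned above; the modulo distribution and the extra_volume name are assumptions:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                                                                # indices [0]..[8] in the plan
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id    # assumed distribution
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id    # assumed resource name
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}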
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }
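Aside: the floating IP, management network, and manager port above fit together roughly as below. The subnet resource name subnet_management is an assumption (the subnet itself is planned elsewhere in this log):

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  # CIDR-sized allowed_address_pairs entries let the internal VIPs and the
  # 192.168.112.0/20 range pass port security on this port
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed name
    ip_address = "192.168.16.5"
  }
}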
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }
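Aside: the six node ports differ only in their fixed IP (192.168.16.10 through .15), which points at a count-expanded port resource roughly like the sketch below; the subnet reference is again an assumption:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed name
    ip_address = "192.168.16.${10 + count.index}"                     # yields .10 .. .15
  }

  # the four allowed_address_pairs blocks shown in the plan above would repeat here
}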
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
      + router_id = (known after apply)
      + subnet_id = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed = (known after apply)
      + enable_snat = (known after apply)
      + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id = (known after apply)
      + name = "testbed"
      + region = (known after apply)
      + tenant_id = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description = "ssh"
      + direction = "ingress"
      + ethertype = "IPv4"
      + id = (known after apply)
      + port_range_max = 22
      + port_range_min = 22
      + protocol = "tcp"
      + region = (known after apply)
      + remote_group_id = (known after apply)
      + remote_ip_prefix = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description = "wireguard"
      + direction = "ingress"
      + ethertype = "IPv4"
      + id = (known after apply)
      + port_range_max = 51820
      + port_range_min = 51820
      + protocol = "udp"
2025-06-02 12:35:02.072101 | orchestrator | 12:35:02.071
STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.072147 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.072152 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.072222 | orchestrator | 12:35:02.072 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.072229 | orchestrator | 12:35:02.072 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.072313 | orchestrator | 12:35:02.072 STDOUT terraform:  } 2025-06-02 12:35:02.072319 | orchestrator | 12:35:02.072 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 12:35:02.072363 | orchestrator | 12:35:02.072 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 12:35:02.072444 | orchestrator | 12:35:02.072 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.072459 | orchestrator | 12:35:02.072 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.072508 | orchestrator | 12:35:02.072 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.072534 | orchestrator | 12:35:02.072 STDOUT terraform:  + protocol = "tcp" 2025-06-02 12:35:02.072541 | orchestrator | 12:35:02.072 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.072545 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.072562 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 12:35:02.072580 | orchestrator | 12:35:02.072 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.072584 | orchestrator | 12:35:02.072 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.072592 | orchestrator | 12:35:02.072 STDOUT terraform:  } 2025-06-02 12:35:02.072785 | orchestrator | 12:35:02.072 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 12:35:02.072803 | orchestrator | 12:35:02.072 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 12:35:02.072830 | orchestrator | 12:35:02.072 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.073042 | orchestrator | 12:35:02.072 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.073095 | orchestrator | 12:35:02.072 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.073125 | orchestrator | 12:35:02.072 STDOUT terraform:  + protocol = "udp" 2025-06-02 12:35:02.073130 | orchestrator | 12:35:02.072 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.073136 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.073154 | orchestrator | 12:35:02.072 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 12:35:02.073228 | orchestrator | 12:35:02.072 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.073293 | orchestrator | 12:35:02.072 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.073308 | orchestrator | 12:35:02.072 STDOUT terraform:  } 2025-06-02 12:35:02.073344 | orchestrator | 12:35:02.072 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 12:35:02.073436 | orchestrator | 12:35:02.072 STDOUT terraform:  + 
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 12:35:02.073463 | orchestrator | 12:35:02.073 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.073510 | orchestrator | 12:35:02.073 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.073604 | orchestrator | 12:35:02.073 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.073651 | orchestrator | 12:35:02.073 STDOUT terraform:  + protocol = "icmp" 2025-06-02 12:35:02.073679 | orchestrator | 12:35:02.073 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.073704 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.073708 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.073712 | orchestrator | 12:35:02.073 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.073716 | orchestrator | 12:35:02.073 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.073786 | orchestrator | 12:35:02.073 STDOUT terraform:  } 2025-06-02 12:35:02.073870 | orchestrator | 12:35:02.073 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 12:35:02.073903 | orchestrator | 12:35:02.073 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 12:35:02.073907 | orchestrator | 12:35:02.073 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.073911 | orchestrator | 12:35:02.073 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.073939 | orchestrator | 12:35:02.073 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.073946 | orchestrator | 12:35:02.073 STDOUT terraform:  + protocol = "tcp" 2025-06-02 12:35:02.073950 | orchestrator | 12:35:02.073 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.073966 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.073982 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.074117 | orchestrator | 12:35:02.073 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.074185 | orchestrator | 12:35:02.073 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.074266 | orchestrator | 12:35:02.073 STDOUT terraform:  } 2025-06-02 12:35:02.074419 | orchestrator | 12:35:02.073 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-02 12:35:02.074538 | orchestrator | 12:35:02.073 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-02 12:35:02.074555 | orchestrator | 12:35:02.073 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.074599 | orchestrator | 12:35:02.073 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.074629 | orchestrator | 12:35:02.073 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.074657 | orchestrator | 12:35:02.073 STDOUT terraform:  + protocol = "udp" 2025-06-02 12:35:02.074673 | orchestrator | 12:35:02.073 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.074692 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.074696 | orchestrator | 12:35:02.073 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.074700 | orchestrator | 
12:35:02.073 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.074733 | orchestrator | 12:35:02.073 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.074739 | orchestrator | 12:35:02.073 STDOUT terraform:  } 2025-06-02 12:35:02.074930 | orchestrator | 12:35:02.074 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-02 12:35:02.074976 | orchestrator | 12:35:02.074 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-02 12:35:02.075004 | orchestrator | 12:35:02.074 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.075061 | orchestrator | 12:35:02.074 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.075066 | orchestrator | 12:35:02.074 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.075142 | orchestrator | 12:35:02.074 STDOUT terraform:  + protocol = "icmp" 2025-06-02 12:35:02.075171 | orchestrator | 12:35:02.074 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.075212 | orchestrator | 12:35:02.074 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.075217 | orchestrator | 12:35:02.074 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.075221 | orchestrator | 12:35:02.074 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.075258 | orchestrator | 12:35:02.074 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.075303 | orchestrator | 12:35:02.074 STDOUT terraform:  } 2025-06-02 12:35:02.075319 | orchestrator | 12:35:02.074 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-02 12:35:02.075347 | orchestrator | 12:35:02.074 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-02 12:35:02.075352 | orchestrator | 12:35:02.074 STDOUT terraform:  + description = "vrrp" 2025-06-02 12:35:02.075356 | orchestrator | 12:35:02.074 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:02.075360 | orchestrator | 12:35:02.074 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:02.075441 | orchestrator | 12:35:02.074 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.075551 | orchestrator | 12:35:02.074 STDOUT terraform:  + protocol = "112" 2025-06-02 12:35:02.075578 | orchestrator | 12:35:02.074 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.075582 | orchestrator | 12:35:02.074 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:02.075586 | orchestrator | 12:35:02.074 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:02.075590 | orchestrator | 12:35:02.074 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:02.075609 | orchestrator | 12:35:02.074 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.075613 | orchestrator | 12:35:02.074 STDOUT terraform:  } 2025-06-02 12:35:02.075617 | orchestrator | 12:35:02.074 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-02 12:35:02.075621 | orchestrator | 12:35:02.074 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-02 12:35:02.075666 | orchestrator | 12:35:02.074 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:02.075682 | orchestrator | 12:35:02.074 STDOUT terraform:  + 
description = "management security group" 2025-06-02 12:35:02.075726 | orchestrator | 12:35:02.074 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.075731 | orchestrator | 12:35:02.074 STDOUT terraform:  + name = "testbed-management" 2025-06-02 12:35:02.075735 | orchestrator | 12:35:02.075 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.075754 | orchestrator | 12:35:02.075 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 12:35:02.075758 | orchestrator | 12:35:02.075 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.075761 | orchestrator | 12:35:02.075 STDOUT terraform:  } 2025-06-02 12:35:02.075780 | orchestrator | 12:35:02.075 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-02 12:35:02.075784 | orchestrator | 12:35:02.075 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-02 12:35:02.075788 | orchestrator | 12:35:02.075 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:02.075795 | orchestrator | 12:35:02.075 STDOUT terraform:  + description = "node security group" 2025-06-02 12:35:02.075917 | orchestrator | 12:35:02.075 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.075969 | orchestrator | 12:35:02.075 STDOUT terraform:  + name = "testbed-node" 2025-06-02 12:35:02.076014 | orchestrator | 12:35:02.075 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.076062 | orchestrator | 12:35:02.075 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 12:35:02.076087 | orchestrator | 12:35:02.075 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.076091 | orchestrator | 12:35:02.075 STDOUT terraform:  } 2025-06-02 12:35:02.076095 | orchestrator | 12:35:02.075 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-02 12:35:02.076099 | orchestrator | 12:35:02.075 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-02 12:35:02.076153 | orchestrator | 12:35:02.075 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:02.076168 | orchestrator | 12:35:02.075 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-02 12:35:02.076229 | orchestrator | 12:35:02.075 STDOUT terraform:  + dns_nameservers = [ 2025-06-02 12:35:02.076258 | orchestrator | 12:35:02.075 STDOUT terraform:  + "8.8.8.8", 2025-06-02 12:35:02.076321 | orchestrator | 12:35:02.075 STDOUT terraform:  + "9.9.9.9", 2025-06-02 12:35:02.076347 | orchestrator | 12:35:02.075 STDOUT terraform:  ] 2025-06-02 12:35:02.076363 | orchestrator | 12:35:02.075 STDOUT terraform:  + enable_dhcp = true 2025-06-02 12:35:02.076491 | orchestrator | 12:35:02.075 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-02 12:35:02.076507 | orchestrator | 12:35:02.075 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.076559 | orchestrator | 12:35:02.075 STDOUT terraform:  + ip_version = 4 2025-06-02 12:35:02.076564 | orchestrator | 12:35:02.075 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-02 12:35:02.076693 | orchestrator | 12:35:02.075 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-02 12:35:02.076740 | orchestrator | 12:35:02.075 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-02 12:35:02.076781 | orchestrator | 12:35:02.075 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 12:35:02.076785 | orchestrator | 12:35:02.075 STDOUT 
terraform:  + no_gateway = false 2025-06-02 12:35:02.076789 | orchestrator | 12:35:02.075 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:02.076804 | orchestrator | 12:35:02.075 STDOUT terraform:  + service_types = (known after apply) 2025-06-02 12:35:02.076819 | orchestrator | 12:35:02.075 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:02.076871 | orchestrator | 12:35:02.076 STDOUT terraform:  + allocation_pool { 2025-06-02 12:35:02.076876 | orchestrator | 12:35:02.076 STDOUT terraform:  + end = "192.168.31.250" 2025-06-02 12:35:02.076880 | orchestrator | 12:35:02.076 STDOUT terraform:  + start = "192.168.31.200" 2025-06-02 12:35:02.076896 | orchestrator | 12:35:02.076 STDOUT terraform:  } 2025-06-02 12:35:02.076900 | orchestrator | 12:35:02.076 STDOUT terraform:  } 2025-06-02 12:35:02.076907 | orchestrator | 12:35:02.076 STDOUT terraform:  # terraform_data.image will be created 2025-06-02 12:35:02.077118 | orchestrator | 12:35:02.076 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-02 12:35:02.077162 | orchestrator | 12:35:02.076 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.077204 | orchestrator | 12:35:02.076 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 12:35:02.077284 | orchestrator | 12:35:02.076 STDOUT terraform:  + output = (known after apply) 2025-06-02 12:35:02.077289 | orchestrator | 12:35:02.076 STDOUT terraform:  } 2025-06-02 12:35:02.077341 | orchestrator | 12:35:02.076 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-02 12:35:02.077357 | orchestrator | 12:35:02.076 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-02 12:35:02.077375 | orchestrator | 12:35:02.076 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:02.077379 | orchestrator | 12:35:02.076 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 12:35:02.077394 | orchestrator | 12:35:02.076 STDOUT terraform:  + output = (known after apply) 2025-06-02 12:35:02.077558 | orchestrator | 12:35:02.076 STDOUT terraform:  } 2025-06-02 12:35:02.077573 | orchestrator | 12:35:02.076 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-02 12:35:02.077615 | orchestrator | 12:35:02.076 STDOUT terraform: Changes to Outputs: 2025-06-02 12:35:02.077619 | orchestrator | 12:35:02.076 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-02 12:35:02.077714 | orchestrator | 12:35:02.076 STDOUT terraform:  + private_key = (sensitive value) 2025-06-02 12:35:02.184838 | orchestrator | 12:35:02.184 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-02 12:35:02.185081 | orchestrator | 12:35:02.184 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=3648d7ad-e128-bcf1-4ec3-1943402246d6] 2025-06-02 12:35:02.242644 | orchestrator | 12:35:02.241 STDOUT terraform: terraform_data.image: Creating... 2025-06-02 12:35:02.242709 | orchestrator | 12:35:02.242 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0aebfc43-2b9f-f0f0-d055-acf307b5389f] 2025-06-02 12:35:02.257298 | orchestrator | 12:35:02.257 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-02 12:35:02.274274 | orchestrator | 12:35:02.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-02 12:35:02.274729 | orchestrator | 12:35:02.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
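The apply phase starting here first records the image name and then resolves it to a Glance UUID (reported a moment later in the log as cd9ae1ce-c4eb-4380-9087-2aa040df6990). A minimal HCL sketch consistent with that sequence; the variable name and the most_recent flag are assumptions, not taken from the testbed sources:

variable "image" {
  type    = string
  default = "Ubuntu 24.04"                 # matches the input value in the plan
}

# terraform_data records the image name so dependent resources are
# replaced when it changes (terraform_data.image in the log).
resource "terraform_data" "image" {
  input = var.image
}

# Resolves the human-readable name to the image UUID used by the servers.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true                       # assumption
}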
2025-06-02 12:35:02.277234 | orchestrator | 12:35:02.276 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-02 12:35:02.278442 | orchestrator | 12:35:02.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-02 12:35:02.278990 | orchestrator | 12:35:02.278 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-02 12:35:02.280614 | orchestrator | 12:35:02.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-02 12:35:02.280747 | orchestrator | 12:35:02.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-02 12:35:02.282921 | orchestrator | 12:35:02.282 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-02 12:35:02.294712 | orchestrator | 12:35:02.294 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-02 12:35:02.717841 | orchestrator | 12:35:02.715 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 12:35:02.725714 | orchestrator | 12:35:02.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-02 12:35:02.743888 | orchestrator | 12:35:02.743 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-02 12:35:02.747997 | orchestrator | 12:35:02.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-02 12:35:08.333547 | orchestrator | 12:35:08.333 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=af54d33a-7507-4d6c-83de-9c476af39fee] 2025-06-02 12:35:08.337615 | orchestrator | 12:35:08.337 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-02 12:35:08.386115 | orchestrator | 12:35:08.385 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 12:35:08.400188 | orchestrator | 12:35:08.399 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-02 12:35:12.278411 | orchestrator | 12:35:12.277 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-02 12:35:12.278879 | orchestrator | 12:35:12.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-02 12:35:12.280436 | orchestrator | 12:35:12.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-02 12:35:12.281617 | orchestrator | 12:35:12.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-02 12:35:12.282749 | orchestrator | 12:35:12.282 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-02 12:35:12.283944 | orchestrator | 12:35:12.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-02 12:35:12.296256 | orchestrator | 12:35:12.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-02 12:35:12.726292 | orchestrator | 12:35:12.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... 
[10s elapsed] 2025-06-02 12:35:12.749347 | orchestrator | 12:35:12.749 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-02 12:35:12.860983 | orchestrator | 12:35:12.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=456d640a-c6eb-4569-8c8e-a4a3fdd3e000] 2025-06-02 12:35:12.873862 | orchestrator | 12:35:12.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=fa9eac55-b7ba-400b-ad39-8d51d062dfbf] 2025-06-02 12:35:12.877493 | orchestrator | 12:35:12.877 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-02 12:35:12.883951 | orchestrator | 12:35:12.883 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=dc6882bf-da04-4edd-9882-73e1f985245e] 2025-06-02 12:35:12.885410 | orchestrator | 12:35:12.885 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=e244f1d3e9425a07ab37618850a7e95f796014e6] 2025-06-02 12:35:12.886412 | orchestrator | 12:35:12.886 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-02 12:35:12.890737 | orchestrator | 12:35:12.889 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=55dfaeb386f43d3e34e457c8a3cff539b54193b6] 2025-06-02 12:35:12.892547 | orchestrator | 12:35:12.891 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-02 12:35:12.892588 | orchestrator | 12:35:12.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-02 12:35:12.898721 | orchestrator | 12:35:12.898 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-02 12:35:12.908617 | orchestrator | 12:35:12.908 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=efdd6e96-769c-48d5-86b4-ee9af75744a8] 2025-06-02 12:35:12.910560 | orchestrator | 12:35:12.910 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b] 2025-06-02 12:35:12.919068 | orchestrator | 12:35:12.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-02 12:35:12.920025 | orchestrator | 12:35:12.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=23117054-a818-47a4-b6cc-218c8fcf9ce0] 2025-06-02 12:35:12.921231 | orchestrator | 12:35:12.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-02 12:35:12.926137 | orchestrator | 12:35:12.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-02 12:35:12.954545 | orchestrator | 12:35:12.954 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=58632b91-4ff4-425f-9799-2cbdbd75f857] 2025-06-02 12:35:12.962121 | orchestrator | 12:35:12.961 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
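The indexed node_volume and node_base_volume resources above are plain count-based declarations. A sketch of how they could look in HCL; the counts are read off the indices in the log, while the names, sizes, and the image reference are assumptions:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                # node_volume[0]..[8] in the log
  name  = "testbed-node-volume-${count.index}"
  size  = 20                               # assumed size in GB
}

# Base volumes the nodes boot from; imaging them with Ubuntu 24.04 is an
# assumption consistent with the image_node data source above.
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count    = 6                             # node_base_volume[0]..[5]
  name     = "testbed-node-base-volume-${count.index}"
  size     = 50                            # assumed
  image_id = data.openstack_images_image_v2.image_node.id
}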
2025-06-02 12:35:12.969520 | orchestrator | 12:35:12.969 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=d9b7d288-6907-4dde-a5ec-8795086443a7] 2025-06-02 12:35:13.004967 | orchestrator | 12:35:13.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=f20c7008-f12c-46ab-b284-b84010eb63eb] 2025-06-02 12:35:18.402149 | orchestrator | 12:35:18.401 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-02 12:35:18.788091 | orchestrator | 12:35:18.787 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=493b38c3-800b-4bdc-b3d6-44caceeab7e6] 2025-06-02 12:35:18.797972 | orchestrator | 12:35:18.797 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-02 12:35:19.602249 | orchestrator | 12:35:19.601 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 12s [id=e02a16cd-728d-4eb0-948c-a555d0421b3a] 2025-06-02 12:35:22.893635 | orchestrator | 12:35:22.893 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-02 12:35:22.899624 | orchestrator | 12:35:22.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-02 12:35:22.919009 | orchestrator | 12:35:22.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-02 12:35:22.922324 | orchestrator | 12:35:22.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-02 12:35:22.926573 | orchestrator | 12:35:22.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-02 12:35:22.962930 | orchestrator | 12:35:22.962 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-06-02 12:35:23.271974 | orchestrator | 12:35:23.271 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5] 2025-06-02 12:35:23.295599 | orchestrator | 12:35:23.295 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=b74c4224-3b45-4fa7-a33d-9e64f92a9cf7] 2025-06-02 12:35:23.328247 | orchestrator | 12:35:23.327 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=76bfcd68-93f4-43fc-a7a6-b1d272437959] 2025-06-02 12:35:23.519320 | orchestrator | 12:35:23.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=430962b1-bfac-488d-a447-b0298874a3fa] 2025-06-02 12:35:23.519878 | orchestrator | 12:35:23.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=b5181ae0-889a-48f6-853e-904cf79da0d2] 2025-06-02 12:35:23.536235 | orchestrator | 12:35:23.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=35465401-401c-49c9-ae8f-f7b96b89b216] 2025-06-02 12:35:26.352813 | orchestrator | 12:35:26.352 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=90ed6099-5aa2-4936-9f9d-cbf1bddc4f31] 2025-06-02 12:35:26.361526 | orchestrator | 12:35:26.361 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-02 12:35:26.362710 | orchestrator | 12:35:26.362 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-02 12:35:26.371769 | orchestrator | 12:35:26.371 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-02 12:35:26.567054 | orchestrator | 12:35:26.566 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=224d58f3-d798-4fa3-9d7e-a8976118b0bd] 2025-06-02 12:35:26.581835 | orchestrator | 12:35:26.581 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-02 12:35:26.585235 | orchestrator | 12:35:26.584 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-02 12:35:26.585297 | orchestrator | 12:35:26.585 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-02 12:35:26.585429 | orchestrator | 12:35:26.585 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-02 12:35:26.592593 | orchestrator | 12:35:26.592 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-02 12:35:26.603485 | orchestrator | 12:35:26.603 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-02 12:35:26.604261 | orchestrator | 12:35:26.604 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=a7628b40-4b42-4498-8335-ed53165bc583] 2025-06-02 12:35:26.608027 | orchestrator | 12:35:26.607 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-02 12:35:26.608124 | orchestrator | 12:35:26.608 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-02 12:35:26.619556 | orchestrator | 12:35:26.619 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
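Each of the security group rules being created here was fully spelled out in the plan; for example, the "wireguard" rule (security_group_management_rule2) maps to HCL like the following, where only the security_group_id reference is an assumption:

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}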
2025-06-02 12:35:26.735229 | orchestrator | 12:35:26.734 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=3d75d0f3-eab8-489d-80d7-0ee8ca6e5246] 2025-06-02 12:35:26.750351 | orchestrator | 12:35:26.750 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-02 12:35:26.902138 | orchestrator | 12:35:26.901 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=0e5d524d-fc49-4f0a-a743-8f26a029b2a5] 2025-06-02 12:35:26.912056 | orchestrator | 12:35:26.911 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-02 12:35:27.048309 | orchestrator | 12:35:27.047 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=d4a50104-d8b6-427f-b94c-d8103d259d34] 2025-06-02 12:35:27.054700 | orchestrator | 12:35:27.054 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-02 12:35:27.223348 | orchestrator | 12:35:27.222 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=17ed722e-bf02-4a91-b937-c7727612d20a] 2025-06-02 12:35:27.230957 | orchestrator | 12:35:27.230 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-02 12:35:27.337565 | orchestrator | 12:35:27.337 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a65050ea-7932-4a45-89f6-8ada6665fed2] 2025-06-02 12:35:27.346813 | orchestrator | 12:35:27.346 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-02 12:35:27.497009 | orchestrator | 12:35:27.496 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=11c756f4-1c20-4285-8fa7-0e46bdac2a89] 2025-06-02 12:35:27.504817 | orchestrator | 12:35:27.504 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-02 12:35:27.637677 | orchestrator | 12:35:27.637 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=ae536ce0-0a09-44ca-8fde-8ba55d117eb7] 2025-06-02 12:35:27.652307 | orchestrator | 12:35:27.652 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
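The management ports carry a fixed IP each (the plan shows 192.168.16.14 and .15 for indices 4 and 5, suggesting 192.168.16.${10 + count.index}) plus the allowed address pairs listed in the plan, which is what lets VRRP/VIP addresses move between nodes without port security dropping the traffic. A sketch of the port resource; the count and the network and subnet references are assumptions:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"
  }

  # One block per pair from the plan
  # (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20, 192.168.16.9/20).
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}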
2025-06-02 12:35:27.810449 | orchestrator | 12:35:27.809 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=1acb5f76-7ddc-4cab-b720-4871c9e816c1] 2025-06-02 12:35:27.962832 | orchestrator | 12:35:27.962 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=97ed4e44-15a6-47ec-aee4-3fd23306ffad] 2025-06-02 12:35:32.296446 | orchestrator | 12:35:32.295 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=a2d9d36e-8956-422c-8ea8-cae0175fca58] 2025-06-02 12:35:32.322673 | orchestrator | 12:35:32.322 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=27d52891-816a-4436-9d38-6c75f2ca3ca7] 2025-06-02 12:35:32.352700 | orchestrator | 12:35:32.352 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=18b6975f-aa8f-4d8e-961b-05124e7aa01e] 2025-06-02 12:35:32.390634 | orchestrator | 12:35:32.390 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=c9afcc7f-6464-480d-b0e2-a4d796f6af1c] 2025-06-02 12:35:32.566909 | orchestrator | 12:35:32.566 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=ce05e247-cd50-4647-abdd-fd7238bd2806] 2025-06-02 12:35:32.603245 | orchestrator | 12:35:32.602 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=9d539432-5ee1-4dc5-844c-57c75767245f] 2025-06-02 12:35:33.313351 | orchestrator | 12:35:33.312 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=d9f2a123-a8c5-4b3a-b6e6-cb41e865dd5e] 2025-06-02 12:35:33.958348 | orchestrator | 12:35:33.957 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=6ee3b9d2-c011-4c19-a29c-42ae62838d6b] 2025-06-02 12:35:33.981624 | orchestrator | 12:35:33.981 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-02 12:35:33.998307 | orchestrator | 12:35:33.998 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-02 12:35:33.998364 | orchestrator | 12:35:33.998 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-02 12:35:34.010625 | orchestrator | 12:35:34.010 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-02 12:35:34.010728 | orchestrator | 12:35:34.010 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-02 12:35:34.013401 | orchestrator | 12:35:34.013 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-02 12:35:34.022757 | orchestrator | 12:35:34.022 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-02 12:35:40.702544 | orchestrator | 12:35:40.702 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=90a5c6f4-8f11-4782-b446-4e2d3c29e7e6] 2025-06-02 12:35:40.713087 | orchestrator | 12:35:40.712 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-02 12:35:40.718643 | orchestrator | 12:35:40.718 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-02 12:35:40.718687 | orchestrator | 12:35:40.718 STDOUT terraform: local_file.inventory: Creating... 
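The manager is the only node that receives a floating IP; the address is associated with the manager's management port and also written to a local file for the Ansible stages that follow. A sketch of that wiring, assuming a pool name and file path; both are assumptions:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"                        # assumed name of network e6be7364-...
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

# Consumed by the "Fetch manager address" task later in this job.
resource "local_file" "MANAGER_ADDRESS" {
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
  filename = "${path.module}/.MANAGER_ADDRESS"   # assumed path
}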
2025-06-02 12:35:40.724617 | orchestrator | 12:35:40.724 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de6ed903c096f9082c041dbb510fc895949887aa] 2025-06-02 12:35:40.729917 | orchestrator | 12:35:40.729 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=508a8c5b0c57f3546f59f323dbf776e3dd6f7570] 2025-06-02 12:35:41.417159 | orchestrator | 12:35:41.416 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=90a5c6f4-8f11-4782-b446-4e2d3c29e7e6] 2025-06-02 12:35:43.999278 | orchestrator | 12:35:43.998 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-02 12:35:44.001432 | orchestrator | 12:35:44.001 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-02 12:35:44.018685 | orchestrator | 12:35:44.018 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-02 12:35:44.018851 | orchestrator | 12:35:44.018 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-02 12:35:44.020823 | orchestrator | 12:35:44.020 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-02 12:35:44.025132 | orchestrator | 12:35:44.024 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-02 12:35:53.999664 | orchestrator | 12:35:53.999 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-02 12:35:54.001645 | orchestrator | 12:35:54.001 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-02 12:35:54.019550 | orchestrator | 12:35:54.019 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-02 12:35:54.019797 | orchestrator | 12:35:54.019 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-02 12:35:54.021662 | orchestrator | 12:35:54.021 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-02 12:35:54.026065 | orchestrator | 12:35:54.025 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-02 12:35:54.508427 | orchestrator | 12:35:54.508 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=651f580c-9b9c-42b9-a5ca-7e62e3255da0] 2025-06-02 12:35:54.529150 | orchestrator | 12:35:54.528 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=2c2eeb45-6038-4400-a347-8947d04531e2] 2025-06-02 12:35:54.540567 | orchestrator | 12:35:54.540 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=011a9a86-6f96-48ca-b69d-f10735ad328a] 2025-06-02 12:35:54.666353 | orchestrator | 12:35:54.665 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=7af915d0-233f-4623-855e-ea334cdd4887] 2025-06-02 12:36:04.001756 | orchestrator | 12:36:04.001 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-06-02 12:36:04.026882 | orchestrator | 12:36:04.026 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-06-02 12:36:04.674851 | orchestrator | 12:36:04.674 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=9d1c6f33-f0e6-47d4-b427-6da06b1eb8dc] 2025-06-02 12:36:04.850273 | orchestrator | 12:36:04.849 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=030619bb-efd2-4c0f-a3dd-ab36a12770d3] 2025-06-02 12:36:04.879725 | orchestrator | 12:36:04.879 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-02 12:36:04.882330 | orchestrator | 12:36:04.882 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-02 12:36:04.883448 | orchestrator | 12:36:04.883 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-02 12:36:04.884810 | orchestrator | 12:36:04.884 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-02 12:36:04.888930 | orchestrator | 12:36:04.888 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-02 12:36:04.894621 | orchestrator | 12:36:04.894 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3478137773588502522] 2025-06-02 12:36:04.898427 | orchestrator | 12:36:04.898 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-02 12:36:04.898706 | orchestrator | 12:36:04.898 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-02 12:36:04.898909 | orchestrator | 12:36:04.898 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-02 12:36:04.899168 | orchestrator | 12:36:04.899 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-02 12:36:04.908600 | orchestrator | 12:36:04.908 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-02 12:36:04.922722 | orchestrator | 12:36:04.922 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
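Nine volume attachments follow, and the instance/volume UUID pairs in their IDs show the distribution: volumes 0/3/6 land on node 3, volumes 1/4/7 on node 4, and volumes 2/5/8 on node 5, so only the last three nodes receive extra volumes. A sketch of that wiring; the index arithmetic is inferred from those pairings, not taken from the source:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}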
2025-06-02 12:36:10.170472 | orchestrator | 12:36:10.169 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=011a9a86-6f96-48ca-b69d-f10735ad328a/efdd6e96-769c-48d5-86b4-ee9af75744a8] 2025-06-02 12:36:10.242937 | orchestrator | 12:36:10.242 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=651f580c-9b9c-42b9-a5ca-7e62e3255da0/58632b91-4ff4-425f-9799-2cbdbd75f857] 2025-06-02 12:36:10.281100 | orchestrator | 12:36:10.280 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=2c2eeb45-6038-4400-a347-8947d04531e2/23117054-a818-47a4-b6cc-218c8fcf9ce0] 2025-06-02 12:36:10.314405 | orchestrator | 12:36:10.313 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=651f580c-9b9c-42b9-a5ca-7e62e3255da0/3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b] 2025-06-02 12:36:10.315331 | orchestrator | 12:36:10.314 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=2c2eeb45-6038-4400-a347-8947d04531e2/456d640a-c6eb-4569-8c8e-a4a3fdd3e000] 2025-06-02 12:36:10.489085 | orchestrator | 12:36:10.488 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=651f580c-9b9c-42b9-a5ca-7e62e3255da0/d9b7d288-6907-4dde-a5ec-8795086443a7] 2025-06-02 12:36:10.498221 | orchestrator | 12:36:10.497 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=2c2eeb45-6038-4400-a347-8947d04531e2/f20c7008-f12c-46ab-b284-b84010eb63eb] 2025-06-02 12:36:10.518183 | orchestrator | 12:36:10.517 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=011a9a86-6f96-48ca-b69d-f10735ad328a/dc6882bf-da04-4edd-9882-73e1f985245e] 2025-06-02 12:36:10.526061 | orchestrator | 12:36:10.525 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=011a9a86-6f96-48ca-b69d-f10735ad328a/fa9eac55-b7ba-400b-ad39-8d51d062dfbf] 2025-06-02 12:36:14.926135 | orchestrator | 12:36:14.925 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-02 12:36:24.926199 | orchestrator | 12:36:24.925 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-02 12:36:25.227941 | orchestrator | 12:36:25.227 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=79d5c08e-69b4-4529-bb56-989e52175e59] 2025-06-02 12:36:25.250728 | orchestrator | 12:36:25.250 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
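The apply finishes with all 64 resources and two outputs that terraform masks because they are declared sensitive, matching the "(sensitive value)" entries in the plan. The declarations likely resemble this sketch; the exact value expressions are assumptions:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}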
2025-06-02 12:36:25.250834 | orchestrator | 12:36:25.250 STDOUT terraform: Outputs: 2025-06-02 12:36:25.250877 | orchestrator | 12:36:25.250 STDOUT terraform: manager_address = <sensitive> 2025-06-02 12:36:25.250912 | orchestrator | 12:36:25.250 STDOUT terraform: private_key = <sensitive> 2025-06-02 12:36:25.570952 | orchestrator | ok: Runtime: 0:01:33.377183 2025-06-02 12:36:25.608471 | 2025-06-02 12:36:25.608668 | TASK [Create infrastructure (stable)] 2025-06-02 12:36:26.147393 | orchestrator | skipping: Conditional result was False 2025-06-02 12:36:26.164102 | 2025-06-02 12:36:26.164305 | TASK [Fetch manager address] 2025-06-02 12:36:26.601147 | orchestrator | ok 2025-06-02 12:36:26.608338 | 2025-06-02 12:36:26.608454 | TASK [Set manager_host address] 2025-06-02 12:36:26.674878 | orchestrator | ok 2025-06-02 12:36:26.683591 | 2025-06-02 12:36:26.683718 | LOOP [Update ansible collections] 2025-06-02 12:36:27.602233 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 12:36:27.602581 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:36:27.602635 | orchestrator | Starting galaxy collection install process 2025-06-02 12:36:27.602671 | orchestrator | Process install dependency map 2025-06-02 12:36:27.602702 | orchestrator | Starting collection install process 2025-06-02 12:36:27.602731 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-06-02 12:36:27.602765 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-06-02 12:36:27.602798 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-02 12:36:27.602891 | orchestrator | ok: Item: commons Runtime: 0:00:00.586548 2025-06-02 12:36:28.491498 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:36:28.491742 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 12:36:28.491828 | orchestrator | Starting galaxy collection install process 2025-06-02 12:36:28.491894 | orchestrator | Process install dependency map 2025-06-02 12:36:28.491955 | orchestrator | Starting collection install process 2025-06-02 12:36:28.492011 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-06-02 12:36:28.492093 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-06-02 12:36:28.492149 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-02 12:36:28.492235 | orchestrator | ok: Item: services Runtime: 0:00:00.614737 2025-06-02 12:36:28.503388 | 2025-06-02 12:36:28.503576 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 12:36:39.087300 | orchestrator | ok 2025-06-02 12:36:39.098626 | 2025-06-02 12:36:39.098774 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-02 12:37:39.143263 | orchestrator | ok 2025-06-02 12:37:39.154223 | 2025-06-02 12:37:39.154356 | TASK [Fetch manager ssh hostkey] 2025-06-02 12:37:40.740111 | orchestrator | Output suppressed because no_log was given 2025-06-02 12:37:40.749039 | 2025-06-02 12:37:40.749203 | TASK [Get ssh keypair from terraform environment] 2025-06-02 12:37:41.281520 | orchestrator 
| ok: Runtime: 0:00:00.009436 2025-06-02 12:37:41.297574 | 2025-06-02 12:37:41.297737 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 12:37:41.328439 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 12:37:41.335291 | 2025-06-02 12:37:41.335403 | TASK [Run manager part 0] 2025-06-02 12:37:42.337725 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:37:42.380997 | orchestrator | 2025-06-02 12:37:42.381048 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-02 12:37:42.381055 | orchestrator | 2025-06-02 12:37:42.381069 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-02 12:37:44.174400 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:44.174474 | orchestrator | 2025-06-02 12:37:44.174498 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 12:37:44.174509 | orchestrator | 2025-06-02 12:37:44.174519 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:37:46.186418 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:46.186579 | orchestrator | 2025-06-02 12:37:46.186587 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 12:37:46.822468 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:46.822516 | orchestrator | 2025-06-02 12:37:46.822523 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 12:37:46.874730 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:46.874803 | orchestrator | 2025-06-02 12:37:46.874817 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-02 12:37:46.914846 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:46.914905 | orchestrator | 2025-06-02 12:37:46.914915 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 12:37:46.957290 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:46.957343 | orchestrator | 2025-06-02 12:37:46.957350 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 12:37:46.987690 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:46.987743 | orchestrator | 2025-06-02 12:37:46.987751 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 12:37:47.019473 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:47.019524 | orchestrator | 2025-06-02 12:37:47.019530 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-02 12:37:47.047050 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:47.047097 | orchestrator | 2025-06-02 12:37:47.047105 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-02 12:37:47.078418 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:47.078460 | orchestrator | 2025-06-02 12:37:47.078468 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-02 12:37:47.860371 | orchestrator | changed: 
[testbed-manager] 2025-06-02 12:37:47.860445 | orchestrator | 2025-06-02 12:37:47.860454 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-02 12:40:43.357635 | orchestrator | changed: [testbed-manager] 2025-06-02 12:40:43.359168 | orchestrator | 2025-06-02 12:40:43.359204 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-02 12:41:54.432742 | orchestrator | changed: [testbed-manager] 2025-06-02 12:41:54.432847 | orchestrator | 2025-06-02 12:41:54.432862 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 12:42:16.103677 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:16.103773 | orchestrator | 2025-06-02 12:42:16.103793 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 12:42:24.542294 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:24.542380 | orchestrator | 2025-06-02 12:42:24.542397 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 12:42:24.589619 | orchestrator | ok: [testbed-manager] 2025-06-02 12:42:24.589704 | orchestrator | 2025-06-02 12:42:24.589719 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-02 12:42:25.372968 | orchestrator | ok: [testbed-manager] 2025-06-02 12:42:25.373058 | orchestrator | 2025-06-02 12:42:25.373075 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-02 12:42:26.112042 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:26.112171 | orchestrator | 2025-06-02 12:42:26.112189 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-02 12:42:32.391644 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:32.391681 | orchestrator | 2025-06-02 12:42:32.391701 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-02 12:42:38.066437 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:38.066524 | orchestrator | 2025-06-02 12:42:38.066542 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-02 12:42:40.537344 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:40.537427 | orchestrator | 2025-06-02 12:42:40.537443 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-02 12:42:42.236588 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:42.236662 | orchestrator | 2025-06-02 12:42:42.236675 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-02 12:42:43.330851 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 12:42:43.330901 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 12:42:43.330909 | orchestrator | 2025-06-02 12:42:43.330916 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-02 12:42:43.379303 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 12:42:43.379380 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 12:42:43.379394 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-06-02 12:42:43.379407 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-02 12:42:48.131746 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 12:42:48.131785 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 12:42:48.131791 | orchestrator | 2025-06-02 12:42:48.131798 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-02 12:42:48.696368 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:48.696436 | orchestrator | 2025-06-02 12:42:48.696451 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-02 12:45:10.928594 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-02 12:45:10.928642 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-02 12:45:10.928651 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-02 12:45:10.928658 | orchestrator | 2025-06-02 12:45:10.928664 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-02 12:45:13.199049 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-02 12:45:13.199133 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-02 12:45:13.199147 | orchestrator | 2025-06-02 12:45:13.199160 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-02 12:45:13.199172 | orchestrator | 2025-06-02 12:45:13.199184 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:45:14.624038 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:14.624076 | orchestrator | 2025-06-02 12:45:14.624083 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 12:45:14.677179 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:14.677228 | orchestrator | 2025-06-02 12:45:14.677241 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 12:45:14.755418 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:14.755457 | orchestrator | 2025-06-02 12:45:14.755466 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 12:45:15.505188 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:15.505277 | orchestrator | 2025-06-02 12:45:15.505294 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 12:45:16.213504 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:16.213599 | orchestrator | 2025-06-02 12:45:16.213617 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 12:45:17.549516 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-02 12:45:17.549608 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-02 12:45:17.549624 | orchestrator | 2025-06-02 12:45:17.549655 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-02 12:45:18.939651 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:18.939764 | orchestrator | 2025-06-02 12:45:18.939783 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-06-02 12:45:20.700235 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:45:20.700319 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-02 12:45:20.700334 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:45:20.700346 | orchestrator | 2025-06-02 12:45:20.700359 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 12:45:21.265878 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:21.265970 | orchestrator | 2025-06-02 12:45:21.265985 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 12:45:21.355300 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:21.355386 | orchestrator | 2025-06-02 12:45:21.355401 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 12:45:22.233116 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:45:22.233197 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:22.233211 | orchestrator | 2025-06-02 12:45:22.233222 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 12:45:22.271916 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:22.271977 | orchestrator | 2025-06-02 12:45:22.271986 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 12:45:22.306544 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:22.306618 | orchestrator | 2025-06-02 12:45:22.306631 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-02 12:45:22.345172 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:22.345233 | orchestrator | 2025-06-02 12:45:22.345244 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 12:45:22.398920 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:22.398986 | orchestrator | 2025-06-02 12:45:22.399000 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 12:45:23.117324 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:23.117409 | orchestrator | 2025-06-02 12:45:23.117425 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 12:45:23.117438 | orchestrator | 2025-06-02 12:45:23.117452 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:45:24.490384 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:24.490474 | orchestrator | 2025-06-02 12:45:24.490489 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-02 12:45:25.436677 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:25.436743 | orchestrator | 2025-06-02 12:45:25.436761 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:45:25.436772 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 12:45:25.436781 | orchestrator | 2025-06-02 12:45:25.659902 | orchestrator | ok: Runtime: 0:07:43.898939 2025-06-02 12:45:25.675788 | 2025-06-02 12:45:25.675929 | TASK [Point out that the log in on the manager is now possible] 2025-06-02 12:45:25.722186 | 
orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-02 12:45:25.731175 | 2025-06-02 12:45:25.731294 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 12:45:25.768115 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 12:45:25.778504 | 2025-06-02 12:45:25.778633 | TASK [Run manager part 1 + 2] 2025-06-02 12:45:26.610345 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:45:26.666472 | orchestrator | 2025-06-02 12:45:26.666523 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-02 12:45:26.666530 | orchestrator | 2025-06-02 12:45:26.666542 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:45:29.507248 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:29.507299 | orchestrator | 2025-06-02 12:45:29.507321 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 12:45:29.542585 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:29.542629 | orchestrator | 2025-06-02 12:45:29.542638 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 12:45:29.577675 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:29.577722 | orchestrator | 2025-06-02 12:45:29.577730 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 12:45:29.610076 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:29.610121 | orchestrator | 2025-06-02 12:45:29.610129 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 12:45:29.685675 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:29.685733 | orchestrator | 2025-06-02 12:45:29.685744 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 12:45:29.746914 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:29.746971 | orchestrator | 2025-06-02 12:45:29.746981 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 12:45:29.793283 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-02 12:45:29.793327 | orchestrator | 2025-06-02 12:45:29.793332 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 12:45:30.490519 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:30.490568 | orchestrator | 2025-06-02 12:45:30.490577 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 12:45:30.534547 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:30.534593 | orchestrator | 2025-06-02 12:45:30.534600 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 12:45:31.874494 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:31.874556 | orchestrator | 2025-06-02 12:45:31.874568 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 12:45:32.455229 | orchestrator | ok: [testbed-manager]
2025-06-02 12:45:32.455397 | orchestrator | 2025-06-02 12:45:32.455412 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 12:45:33.561839 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:33.561895 | orchestrator | 2025-06-02 12:45:33.561904 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 12:45:45.619581 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:45.619657 | orchestrator | 2025-06-02 12:45:45.619673 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 12:45:46.266629 | orchestrator | ok: [testbed-manager] 2025-06-02 12:45:46.266717 | orchestrator | 2025-06-02 12:45:46.266735 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 12:45:46.321254 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:45:46.321321 | orchestrator | 2025-06-02 12:45:46.321334 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-02 12:45:47.259635 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:47.259722 | orchestrator | 2025-06-02 12:45:47.259740 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-02 12:45:48.187243 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:48.187442 | orchestrator | 2025-06-02 12:45:48.187460 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-02 12:45:48.748336 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:48.748419 | orchestrator | 2025-06-02 12:45:48.748436 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-02 12:45:48.790388 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 12:45:48.790504 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 12:45:48.790521 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 12:45:48.790536 | orchestrator | deprecation_warnings=False in ansible.cfg. 
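The deprecation warning above names its own remedy: deprecation_warnings=False in ansible.cfg. A minimal sketch of the corresponding stanza, assuming the standard [defaults] section (which ansible.cfg is read follows the usual lookup order: ANSIBLE_CONFIG, ./ansible.cfg, ~/.ansible.cfg, /etc/ansible/ansible.cfg):

    [defaults]
    # Silence DEPRECATION WARNING messages such as the stdin notice above.
    deprecation_warnings = False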
2025-06-02 12:45:51.852680 | orchestrator | changed: [testbed-manager] 2025-06-02 12:45:51.852731 | orchestrator | 2025-06-02 12:45:51.852740 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-02 12:46:00.454160 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-02 12:46:00.454261 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-02 12:46:00.454280 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-02 12:46:00.454292 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-02 12:46:00.454313 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-02 12:46:00.454325 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-02 12:46:00.454336 | orchestrator | 2025-06-02 12:46:00.454349 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-02 12:46:01.481353 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:01.481398 | orchestrator | 2025-06-02 12:46:01.481407 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-02 12:46:01.522649 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:01.522689 | orchestrator | 2025-06-02 12:46:01.522698 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-02 12:46:04.587896 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:04.587935 | orchestrator | 2025-06-02 12:46:04.587943 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-02 12:46:04.630258 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:04.630324 | orchestrator | 2025-06-02 12:46:04.630340 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-02 12:47:37.786394 | orchestrator | changed: [testbed-manager] 2025-06-02 12:47:37.786430 | orchestrator | 2025-06-02 12:47:37.786437 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-02 12:47:38.882834 | orchestrator | ok: [testbed-manager] 2025-06-02 12:47:38.882921 | orchestrator | 2025-06-02 12:47:38.882939 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:47:38.882955 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-02 12:47:38.882968 | orchestrator | 2025-06-02 12:47:39.408383 | orchestrator | ok: Runtime: 0:02:12.888381 2025-06-02 12:47:39.425330 | 2025-06-02 12:47:39.425512 | TASK [Reboot manager] 2025-06-02 12:47:40.970043 | orchestrator | ok: Runtime: 0:00:00.964967 2025-06-02 12:47:40.987288 | 2025-06-02 12:47:40.987492 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 12:47:55.800624 | orchestrator | ok 2025-06-02 12:47:55.812886 | 2025-06-02 12:47:55.813057 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-02 12:48:55.879127 | orchestrator | ok 2025-06-02 12:48:55.886708 | 2025-06-02 12:48:55.886826 | TASK [Deploy manager + bootstrap nodes] 2025-06-02 12:48:58.342826 | orchestrator | 2025-06-02 12:48:58.343041 | orchestrator | # DEPLOY MANAGER 2025-06-02 12:48:58.343080 | orchestrator | 2025-06-02 12:48:58.343097 | orchestrator | + set -e 2025-06-02 12:48:58.343111 | orchestrator | + echo 2025-06-02 12:48:58.343126 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-02 12:48:58.343144 | orchestrator | + echo 2025-06-02 12:48:58.343209 | orchestrator | + cat /opt/manager-vars.sh 2025-06-02 12:48:58.345455 | orchestrator | export NUMBER_OF_NODES=6 2025-06-02 12:48:58.345575 | orchestrator | 2025-06-02 12:48:58.345595 | orchestrator | export CEPH_VERSION=reef 2025-06-02 12:48:58.345611 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-02 12:48:58.345624 | orchestrator | export MANAGER_VERSION=latest 2025-06-02 12:48:58.345652 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-02 12:48:58.345663 | orchestrator | 2025-06-02 12:48:58.345682 | orchestrator | export ARA=false 2025-06-02 12:48:58.345694 | orchestrator | export DEPLOY_MODE=manager 2025-06-02 12:48:58.345712 | orchestrator | export TEMPEST=false 2025-06-02 12:48:58.345724 | orchestrator | export IS_ZUUL=true 2025-06-02 12:48:58.345736 | orchestrator | 2025-06-02 12:48:58.345755 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129 2025-06-02 12:48:58.345767 | orchestrator | export EXTERNAL_API=false 2025-06-02 12:48:58.345778 | orchestrator | 2025-06-02 12:48:58.345789 | orchestrator | export IMAGE_USER=ubuntu 2025-06-02 12:48:58.345803 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-02 12:48:58.345815 | orchestrator | 2025-06-02 12:48:58.345826 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-02 12:48:58.345849 | orchestrator | 2025-06-02 12:48:58.345861 | orchestrator | + echo 2025-06-02 12:48:58.345873 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 12:48:58.346633 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 12:48:58.346660 | orchestrator | ++ INTERACTIVE=false 2025-06-02 12:48:58.346674 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 12:48:58.346715 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 12:48:58.346733 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 12:48:58.346747 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 12:48:58.346782 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 12:48:58.346794 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 12:48:58.346806 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 12:48:58.346840 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 12:48:58.346854 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 12:48:58.346868 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 12:48:58.346881 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 12:48:58.346893 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 12:48:58.346938 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 12:48:58.346949 | orchestrator | ++ export ARA=false 2025-06-02 12:48:58.346961 | orchestrator | ++ ARA=false 2025-06-02 12:48:58.346971 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 12:48:58.346982 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 12:48:58.346997 | orchestrator | ++ export TEMPEST=false 2025-06-02 12:48:58.347008 | orchestrator | ++ TEMPEST=false 2025-06-02 12:48:58.347019 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 12:48:58.347030 | orchestrator | ++ IS_ZUUL=true 2025-06-02 12:48:58.347041 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129 2025-06-02 12:48:58.347052 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129 2025-06-02 12:48:58.347064 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 12:48:58.347074 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 12:48:58.347085 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 
12:48:58.347096 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 12:48:58.347107 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 12:48:58.347118 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 12:48:58.347129 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 12:48:58.347140 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 12:48:58.347151 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-02 12:48:58.402325 | orchestrator | + docker version 2025-06-02 12:48:58.664874 | orchestrator | Client: Docker Engine - Community 2025-06-02 12:48:58.665004 | orchestrator | Version: 27.5.1 2025-06-02 12:48:58.665024 | orchestrator | API version: 1.47 2025-06-02 12:48:58.665036 | orchestrator | Go version: go1.22.11 2025-06-02 12:48:58.665047 | orchestrator | Git commit: 9f9e405 2025-06-02 12:48:58.665059 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-02 12:48:58.665072 | orchestrator | OS/Arch: linux/amd64 2025-06-02 12:48:58.665082 | orchestrator | Context: default 2025-06-02 12:48:58.665094 | orchestrator | 2025-06-02 12:48:58.665105 | orchestrator | Server: Docker Engine - Community 2025-06-02 12:48:58.665117 | orchestrator | Engine: 2025-06-02 12:48:58.665127 | orchestrator | Version: 27.5.1 2025-06-02 12:48:58.665137 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-02 12:48:58.665176 | orchestrator | Go version: go1.22.11 2025-06-02 12:48:58.665193 | orchestrator | Git commit: 4c9b3b0 2025-06-02 12:48:58.665208 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-02 12:48:58.665224 | orchestrator | OS/Arch: linux/amd64 2025-06-02 12:48:58.665239 | orchestrator | Experimental: false 2025-06-02 12:48:58.665254 | orchestrator | containerd: 2025-06-02 12:48:58.665271 | orchestrator | Version: 1.7.27 2025-06-02 12:48:58.665287 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-02 12:48:58.665304 | orchestrator | runc: 2025-06-02 12:48:58.665321 | orchestrator | Version: 1.2.5 2025-06-02 12:48:58.665337 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-02 12:48:58.665353 | orchestrator | docker-init: 2025-06-02 12:48:58.665369 | orchestrator | Version: 0.19.0 2025-06-02 12:48:58.665386 | orchestrator | GitCommit: de40ad0 2025-06-02 12:48:58.669064 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-02 12:48:58.677252 | orchestrator | + set -e 2025-06-02 12:48:58.677355 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 12:48:58.677382 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 12:48:58.677402 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 12:48:58.677421 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 12:48:58.677438 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 12:48:58.677456 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 12:48:58.677477 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 12:48:58.677496 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 12:48:58.677515 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 12:48:58.677535 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 12:48:58.677584 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 12:48:58.677604 | orchestrator | ++ export ARA=false 2025-06-02 12:48:58.677622 | orchestrator | ++ ARA=false 2025-06-02 12:48:58.677640 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 12:48:58.677659 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 12:48:58.677677 | orchestrator | ++ 
export TEMPEST=false 2025-06-02 12:48:58.677695 | orchestrator | ++ TEMPEST=false 2025-06-02 12:48:58.677714 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 12:48:58.677733 | orchestrator | ++ IS_ZUUL=true 2025-06-02 12:48:58.677750 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129 2025-06-02 12:48:58.677768 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129 2025-06-02 12:48:58.677786 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 12:48:58.677804 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 12:48:58.677823 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 12:48:58.677841 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 12:48:58.677861 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 12:48:58.677879 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 12:48:58.677899 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 12:48:58.677919 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 12:48:58.677939 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 12:48:58.677959 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 12:48:58.677980 | orchestrator | ++ INTERACTIVE=false 2025-06-02 12:48:58.677999 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 12:48:58.678099 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 12:48:58.678126 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 12:48:58.678410 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 12:48:58.678445 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-02 12:48:58.684680 | orchestrator | + set -e 2025-06-02 12:48:58.684737 | orchestrator | + VERSION=reef 2025-06-02 12:48:58.685518 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-02 12:48:58.691906 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-02 12:48:58.692006 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-02 12:48:58.698810 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-02 12:48:58.705958 | orchestrator | + set -e 2025-06-02 12:48:58.706400 | orchestrator | + VERSION=2024.2 2025-06-02 12:48:58.707455 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-02 12:48:58.712865 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-02 12:48:58.712920 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-02 12:48:58.717909 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-02 12:48:58.719106 | orchestrator | ++ semver latest 7.0.0 2025-06-02 12:48:58.783080 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 12:48:58.783170 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 12:48:58.783184 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-02 12:48:58.783208 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-02 12:48:58.822942 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 12:48:58.825333 | orchestrator | + source /opt/venv/bin/activate 2025-06-02 12:48:58.826576 | orchestrator | ++ deactivate nondestructive 2025-06-02 12:48:58.826603 | orchestrator | ++ '[' -n '' ']' 2025-06-02 12:48:58.826615 | orchestrator | ++ '[' -n '' ']' 2025-06-02 12:48:58.826626 | orchestrator | ++ hash -r 2025-06-02 12:48:58.826638 | orchestrator | 
++ '[' -n '' ']' 2025-06-02 12:48:58.826649 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-02 12:48:58.826660 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-02 12:48:58.826676 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-02 12:48:58.826689 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-02 12:48:58.826703 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-02 12:48:58.826888 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-02 12:48:58.826905 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-02 12:48:58.826917 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 12:48:58.826929 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 12:48:58.826940 | orchestrator | ++ export PATH 2025-06-02 12:48:58.827031 | orchestrator | ++ '[' -n '' ']' 2025-06-02 12:48:58.827130 | orchestrator | ++ '[' -z '' ']' 2025-06-02 12:48:58.827145 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-02 12:48:58.827160 | orchestrator | ++ PS1='(venv) ' 2025-06-02 12:48:58.827171 | orchestrator | ++ export PS1 2025-06-02 12:48:58.827182 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-02 12:48:58.827193 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-02 12:48:58.827204 | orchestrator | ++ hash -r 2025-06-02 12:48:58.827870 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-02 12:49:00.038103 | orchestrator | 2025-06-02 12:49:00.038192 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-02 12:49:00.038201 | orchestrator | 2025-06-02 12:49:00.038208 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-02 12:49:00.683211 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:00.683348 | orchestrator | 2025-06-02 12:49:00.683365 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-02 12:49:01.684231 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:01.684352 | orchestrator | 2025-06-02 12:49:01.684370 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-02 12:49:01.684393 | orchestrator | 2025-06-02 12:49:01.684405 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:49:04.031356 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:04.031495 | orchestrator | 2025-06-02 12:49:04.031526 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-02 12:49:04.097667 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:04.097815 | orchestrator | 2025-06-02 12:49:04.097850 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-02 12:49:04.545281 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:04.545412 | orchestrator | 2025-06-02 12:49:04.545437 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-02 12:49:04.587329 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:04.587422 | orchestrator | 2025-06-02 12:49:04.587437 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-06-02 12:49:04.922141 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:04.922255 | orchestrator | 2025-06-02 12:49:04.922272 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-06-02 12:49:04.978175 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:04.978259 | orchestrator | 2025-06-02 12:49:04.978276 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-02 12:49:05.323395 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:05.323510 | orchestrator | 2025-06-02 12:49:05.323526 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-02 12:49:05.443603 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:05.443695 | orchestrator | 2025-06-02 12:49:05.443711 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-02 12:49:05.443724 | orchestrator | 2025-06-02 12:49:05.443739 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:49:07.207100 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:07.207213 | orchestrator | 2025-06-02 12:49:07.207230 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-02 12:49:07.322280 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-02 12:49:07.322378 | orchestrator | 2025-06-02 12:49:07.322392 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-02 12:49:07.383688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-02 12:49:07.383775 | orchestrator | 2025-06-02 12:49:07.383789 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-02 12:49:08.486701 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-02 12:49:08.486801 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-02 12:49:08.486816 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-02 12:49:08.486828 | orchestrator | 2025-06-02 12:49:08.486841 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-02 12:49:10.255041 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-02 12:49:10.255154 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-02 12:49:10.255173 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-02 12:49:10.255185 | orchestrator | 2025-06-02 12:49:10.255198 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-02 12:49:10.868442 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:49:10.868609 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:10.868628 | orchestrator | 2025-06-02 12:49:10.868641 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-02 12:49:11.503071 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:49:11.503922 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:11.503957 | orchestrator | 2025-06-02 12:49:11.503973 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-06-02 12:49:11.560865 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:11.560933 | orchestrator | 2025-06-02 12:49:11.560947 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-02 12:49:11.917682 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:11.917782 | orchestrator | 2025-06-02 12:49:11.917797 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-02 12:49:11.977787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-02 12:49:11.977864 | orchestrator | 2025-06-02 12:49:11.977884 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-02 12:49:13.033721 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:13.033833 | orchestrator | 2025-06-02 12:49:13.033850 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-02 12:49:13.815004 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:13.815109 | orchestrator | 2025-06-02 12:49:13.815125 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-02 12:49:25.373021 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:25.373202 | orchestrator | 2025-06-02 12:49:25.373222 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-02 12:49:25.418814 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:25.418899 | orchestrator | 2025-06-02 12:49:25.418911 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-02 12:49:25.418921 | orchestrator | 2025-06-02 12:49:25.418929 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:49:28.224011 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:28.224117 | orchestrator | 2025-06-02 12:49:28.224160 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-02 12:49:28.328670 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-02 12:49:28.328770 | orchestrator | 2025-06-02 12:49:28.328785 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-02 12:49:28.385355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 12:49:28.385476 | orchestrator | 2025-06-02 12:49:28.385494 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-02 12:49:30.751584 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:30.751691 | orchestrator | 2025-06-02 12:49:30.751709 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-02 12:49:30.800011 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:30.800091 | orchestrator | 2025-06-02 12:49:30.800108 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-02 12:49:30.924065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-02 12:49:30.924157 | orchestrator | 2025-06-02 12:49:30.924172 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-02 12:49:33.691895 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-02 12:49:33.692005 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-02 12:49:33.692022 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-02 12:49:33.692034 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-02 12:49:33.692046 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-02 12:49:33.692057 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-02 12:49:33.692069 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-02 12:49:33.692080 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-02 12:49:33.692092 | orchestrator | 2025-06-02 12:49:33.692105 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-02 12:49:34.316948 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:34.317073 | orchestrator | 2025-06-02 12:49:34.317092 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-02 12:49:34.951029 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:34.951149 | orchestrator | 2025-06-02 12:49:34.951165 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-02 12:49:35.040317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-02 12:49:35.040424 | orchestrator | 2025-06-02 12:49:35.040440 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-02 12:49:36.280222 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-02 12:49:36.281208 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-02 12:49:36.281255 | orchestrator | 2025-06-02 12:49:36.281270 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-02 12:49:36.879471 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:36.879637 | orchestrator | 2025-06-02 12:49:36.879654 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-02 12:49:36.927156 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:36.927256 | orchestrator | 2025-06-02 12:49:36.927274 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-02 12:49:36.989610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-02 12:49:36.989713 | orchestrator | 2025-06-02 12:49:36.989737 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-02 12:49:38.291303 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:49:38.291411 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:49:38.291427 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:38.291440 | orchestrator | 2025-06-02 12:49:38.291453 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-02 12:49:38.910368 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:38.910473 
| orchestrator | 2025-06-02 12:49:38.910490 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-02 12:49:38.966303 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:38.966388 | orchestrator | 2025-06-02 12:49:38.966404 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-02 12:49:39.057416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-02 12:49:39.057570 | orchestrator | 2025-06-02 12:49:39.057590 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-02 12:49:39.567402 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:39.567559 | orchestrator | 2025-06-02 12:49:39.567577 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-02 12:49:39.957185 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:39.957277 | orchestrator | 2025-06-02 12:49:39.957285 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-02 12:49:41.120760 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-02 12:49:41.120868 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-02 12:49:41.120882 | orchestrator | 2025-06-02 12:49:41.120895 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-02 12:49:41.733258 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:41.733366 | orchestrator | 2025-06-02 12:49:41.733381 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-02 12:49:42.120132 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:42.120231 | orchestrator | 2025-06-02 12:49:42.120246 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-02 12:49:42.455708 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:42.455807 | orchestrator | 2025-06-02 12:49:42.455823 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-02 12:49:42.503096 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:42.503128 | orchestrator | 2025-06-02 12:49:42.503140 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-02 12:49:42.585373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-02 12:49:42.585452 | orchestrator | 2025-06-02 12:49:42.585469 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-02 12:49:42.641620 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:42.641685 | orchestrator | 2025-06-02 12:49:42.641698 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-02 12:49:44.625024 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-02 12:49:44.625141 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-02 12:49:44.625158 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-02 12:49:44.625170 | orchestrator | 2025-06-02 12:49:44.625183 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-06-02 12:49:45.323247 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:45.323353 | orchestrator | 2025-06-02 12:49:45.323370 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-02 12:49:46.014180 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:46.014290 | orchestrator | 2025-06-02 12:49:46.014307 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-02 12:49:46.700285 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:46.700407 | orchestrator | 2025-06-02 12:49:46.700435 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-02 12:49:46.771657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-02 12:49:46.771759 | orchestrator | 2025-06-02 12:49:46.771774 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-02 12:49:46.817354 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:46.817452 | orchestrator | 2025-06-02 12:49:46.817468 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-02 12:49:47.463426 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-02 12:49:47.463586 | orchestrator | 2025-06-02 12:49:47.463603 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-02 12:49:47.552771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-02 12:49:47.552859 | orchestrator | 2025-06-02 12:49:47.552873 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-02 12:49:48.210758 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:48.210860 | orchestrator | 2025-06-02 12:49:48.210875 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-02 12:49:48.817582 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:48.817684 | orchestrator | 2025-06-02 12:49:48.817699 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-02 12:49:48.860434 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:49:48.860549 | orchestrator | 2025-06-02 12:49:48.860564 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-02 12:49:48.913141 | orchestrator | ok: [testbed-manager] 2025-06-02 12:49:48.913217 | orchestrator | 2025-06-02 12:49:48.913230 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-02 12:49:49.702732 | orchestrator | changed: [testbed-manager] 2025-06-02 12:49:49.702835 | orchestrator | 2025-06-02 12:49:49.702852 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-02 12:50:52.952618 | orchestrator | changed: [testbed-manager] 2025-06-02 12:50:52.952763 | orchestrator | 2025-06-02 12:50:52.952790 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-02 12:50:53.932948 | orchestrator | ok: [testbed-manager] 2025-06-02 12:50:53.933059 | orchestrator | 2025-06-02 12:50:53.933075 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-06-02 12:50:53.993199 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:50:53.993300 | orchestrator | 2025-06-02 12:50:53.993314 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-06-02 12:50:56.669551 | orchestrator | changed: [testbed-manager] 2025-06-02 12:50:56.669656 | orchestrator | 2025-06-02 12:50:56.669672 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-02 12:50:56.728168 | orchestrator | ok: [testbed-manager] 2025-06-02 12:50:56.728208 | orchestrator | 2025-06-02 12:50:56.728222 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 12:50:56.728234 | orchestrator | 2025-06-02 12:50:56.728246 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 12:50:56.771863 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:50:56.771909 | orchestrator | 2025-06-02 12:50:56.771922 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 12:51:56.828951 | orchestrator | Pausing for 60 seconds 2025-06-02 12:51:56.829039 | orchestrator | changed: [testbed-manager] 2025-06-02 12:51:56.829052 | orchestrator | 2025-06-02 12:51:56.829065 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 12:52:01.561787 | orchestrator | changed: [testbed-manager] 2025-06-02 12:52:01.561899 | orchestrator | 2025-06-02 12:52:01.561916 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 12:52:43.150311 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 12:52:43.150481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 12:52:43.150499 | orchestrator | changed: [testbed-manager] 2025-06-02 12:52:43.150513 | orchestrator | 2025-06-02 12:52:43.150525 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-02 12:52:51.576045 | orchestrator | changed: [testbed-manager] 2025-06-02 12:52:51.576169 | orchestrator | 2025-06-02 12:52:51.576189 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-02 12:52:51.671234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-02 12:52:51.671461 | orchestrator | 2025-06-02 12:52:51.671481 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 12:52:51.671494 | orchestrator | 2025-06-02 12:52:51.671505 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-02 12:52:51.734285 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:52:51.734428 | orchestrator | 2025-06-02 12:52:51.734444 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:52:51.734456 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 12:52:51.734467 | orchestrator | 2025-06-02 12:52:51.853473 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 12:52:51.853583 | orchestrator | + deactivate 2025-06-02 12:52:51.853606 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-02 12:52:51.853627 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 12:52:51.853644 | orchestrator | + export PATH 2025-06-02 12:52:51.853663 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-02 12:52:51.853683 | orchestrator | + '[' -n '' ']' 2025-06-02 12:52:51.853702 | orchestrator | + hash -r 2025-06-02 12:52:51.853721 | orchestrator | + '[' -n '' ']' 2025-06-02 12:52:51.853737 | orchestrator | + unset VIRTUAL_ENV 2025-06-02 12:52:51.853748 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-02 12:52:51.853759 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-06-02 12:52:51.853769 | orchestrator | + unset -f deactivate 2025-06-02 12:52:51.853781 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-02 12:52:51.858672 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 12:52:51.858709 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 12:52:51.858721 | orchestrator | + local max_attempts=60 2025-06-02 12:52:51.858732 | orchestrator | + local name=ceph-ansible 2025-06-02 12:52:51.858743 | orchestrator | + local attempt_num=1 2025-06-02 12:52:51.859663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 12:52:51.897034 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:52:51.897133 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 12:52:51.897149 | orchestrator | + local max_attempts=60 2025-06-02 12:52:51.897168 | orchestrator | + local name=kolla-ansible 2025-06-02 12:52:51.897187 | orchestrator | + local attempt_num=1 2025-06-02 12:52:51.897738 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 12:52:51.932973 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:52:51.933065 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 12:52:51.933081 | orchestrator | + local max_attempts=60 2025-06-02 12:52:51.933093 | orchestrator | + local name=osism-ansible 2025-06-02 12:52:51.933105 | orchestrator | + local attempt_num=1 2025-06-02 12:52:51.933316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-02 12:52:51.967448 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:52:51.967503 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 12:52:51.967516 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 12:52:52.709852 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-02 12:52:52.895128 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-02 12:52:52.895229 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895245 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895258 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-02 12:52:52.895271 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-02 12:52:52.895305 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895370 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895383 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-02 12:52:52.895394 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895405 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-02 12:52:52.895416 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895427 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-02 12:52:52.895438 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895449 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895459 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.895470 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-02 12:52:52.902452 | orchestrator | ++ semver latest 7.0.0 2025-06-02 12:52:52.958783 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 12:52:52.958864 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 12:52:52.958880 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-02 12:52:52.963663 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-02 12:52:54.727588 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:52:54.727693 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:52:54.727708 | orchestrator | Registering Redlock._release_script 2025-06-02 12:52:54.914885 | orchestrator | 2025-06-02 12:52:54 | INFO  | Task 0cffa19b-4706-4199-8a8c-b09fc9f9e953 (resolvconf) was prepared for execution. 2025-06-02 12:52:54.914984 | orchestrator | 2025-06-02 12:52:54 | INFO  | It takes a moment until task 0cffa19b-4706-4199-8a8c-b09fc9f9e953 (resolvconf) has been started and output is visible here. 
2025-06-02 12:52:58.860249 | orchestrator | 2025-06-02 12:52:58.860437 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-02 12:52:58.860712 | orchestrator | 2025-06-02 12:52:58.861303 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:52:58.862786 | orchestrator | Monday 02 June 2025 12:52:58 +0000 (0:00:00.145) 0:00:00.145 *********** 2025-06-02 12:53:02.740568 | orchestrator | ok: [testbed-manager] 2025-06-02 12:53:02.740684 | orchestrator | 2025-06-02 12:53:02.740706 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 12:53:02.741746 | orchestrator | Monday 02 June 2025 12:53:02 +0000 (0:00:03.881) 0:00:04.027 *********** 2025-06-02 12:53:02.800072 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:02.801059 | orchestrator | 2025-06-02 12:53:02.801668 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 12:53:02.802406 | orchestrator | Monday 02 June 2025 12:53:02 +0000 (0:00:00.061) 0:00:04.089 *********** 2025-06-02 12:53:02.885586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-02 12:53:02.885670 | orchestrator | 2025-06-02 12:53:02.887698 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 12:53:02.888133 | orchestrator | Monday 02 June 2025 12:53:02 +0000 (0:00:00.085) 0:00:04.174 *********** 2025-06-02 12:53:02.970778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 12:53:02.970878 | orchestrator | 2025-06-02 12:53:02.971806 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 12:53:02.972955 | orchestrator | Monday 02 June 2025 12:53:02 +0000 (0:00:00.085) 0:00:04.260 *********** 2025-06-02 12:53:04.097712 | orchestrator | ok: [testbed-manager] 2025-06-02 12:53:04.097817 | orchestrator | 2025-06-02 12:53:04.098288 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 12:53:04.099039 | orchestrator | Monday 02 June 2025 12:53:04 +0000 (0:00:01.125) 0:00:05.385 *********** 2025-06-02 12:53:04.169084 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:04.170491 | orchestrator | 2025-06-02 12:53:04.171139 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 12:53:04.172163 | orchestrator | Monday 02 June 2025 12:53:04 +0000 (0:00:00.072) 0:00:05.457 *********** 2025-06-02 12:53:04.700177 | orchestrator | ok: [testbed-manager] 2025-06-02 12:53:04.700280 | orchestrator | 2025-06-02 12:53:04.701557 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 12:53:04.702305 | orchestrator | Monday 02 June 2025 12:53:04 +0000 (0:00:00.530) 0:00:05.988 *********** 2025-06-02 12:53:04.781470 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:04.781578 | orchestrator | 2025-06-02 12:53:04.782450 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 12:53:04.782921 | orchestrator | Monday 02 June 2025 12:53:04 +0000 (0:00:00.081) 0:00:06.069 
*********** 2025-06-02 12:53:05.321812 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:05.322214 | orchestrator | 2025-06-02 12:53:05.323561 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 12:53:05.325067 | orchestrator | Monday 02 June 2025 12:53:05 +0000 (0:00:00.540) 0:00:06.610 *********** 2025-06-02 12:53:06.416791 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:06.417706 | orchestrator | 2025-06-02 12:53:06.418593 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 12:53:06.419534 | orchestrator | Monday 02 June 2025 12:53:06 +0000 (0:00:01.094) 0:00:07.705 *********** 2025-06-02 12:53:07.417636 | orchestrator | ok: [testbed-manager] 2025-06-02 12:53:07.417740 | orchestrator | 2025-06-02 12:53:07.418706 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 12:53:07.419525 | orchestrator | Monday 02 June 2025 12:53:07 +0000 (0:00:01.001) 0:00:08.706 *********** 2025-06-02 12:53:07.482181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-02 12:53:07.482737 | orchestrator | 2025-06-02 12:53:07.483501 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 12:53:07.484405 | orchestrator | Monday 02 June 2025 12:53:07 +0000 (0:00:00.065) 0:00:08.772 *********** 2025-06-02 12:53:08.643864 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:08.645019 | orchestrator | 2025-06-02 12:53:08.645714 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:53:08.646340 | orchestrator | 2025-06-02 12:53:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:53:08.646753 | orchestrator | 2025-06-02 12:53:08 | INFO  | Please wait and do not abort execution. 
2025-06-02 12:53:08.664600 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 12:53:08.664653 | orchestrator | 2025-06-02 12:53:08.664664 | orchestrator | 2025-06-02 12:53:08.664677 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:53:08.664689 | orchestrator | Monday 02 June 2025 12:53:08 +0000 (0:00:01.159) 0:00:09.931 *********** 2025-06-02 12:53:08.664700 | orchestrator | =============================================================================== 2025-06-02 12:53:08.664712 | orchestrator | Gathering Facts --------------------------------------------------------- 3.88s 2025-06-02 12:53:08.664723 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-06-02 12:53:08.664734 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2025-06-02 12:53:08.664745 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2025-06-02 12:53:08.664756 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2025-06-02 12:53:08.664767 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-06-02 12:53:08.664779 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2025-06-02 12:53:08.664790 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-06-02 12:53:08.664801 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-06-02 12:53:08.664813 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-02 12:53:08.664824 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-06-02 12:53:08.664835 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-06-02 12:53:08.664846 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-06-02 12:53:09.101108 | orchestrator | + osism apply sshconfig 2025-06-02 12:53:10.750208 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:53:10.750390 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:53:10.750408 | orchestrator | Registering Redlock._release_script 2025-06-02 12:53:10.805385 | orchestrator | 2025-06-02 12:53:10 | INFO  | Task 38456801-e313-4459-8f73-5d6d32b5293f (sshconfig) was prepared for execution. 2025-06-02 12:53:10.805477 | orchestrator | 2025-06-02 12:53:10 | INFO  | It takes a moment until task 38456801-e313-4459-8f73-5d6d32b5293f (sshconfig) has been started and output is visible here. 
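The resolvconf play above amounts to switching the manager to systemd-resolved: it removes packages that would otherwise manage /etc/resolv.conf, links the systemd-resolved stub file into place, copies the resolved configuration, and restarts the service. A rough shell equivalent of what the role's tasks effect (a sketch; the role drives this through Ansible modules, not these literal commands):

    # Point /etc/resolv.conf at the systemd-resolved stub resolver
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    # Make sure the resolver service is enabled and running
    systemctl enable --now systemd-resolved
    # Pick up the configuration files the role just copied
    systemctl restart systemd-resolved
    # Optional check: show the DNS servers now in effect
    resolvectl status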
2025-06-02 12:53:14.821650 | orchestrator | 2025-06-02 12:53:14.821763 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-02 12:53:14.823374 | orchestrator | 2025-06-02 12:53:14.823403 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-02 12:53:14.823639 | orchestrator | Monday 02 June 2025 12:53:14 +0000 (0:00:00.173) 0:00:00.173 *********** 2025-06-02 12:53:15.367301 | orchestrator | ok: [testbed-manager] 2025-06-02 12:53:15.367441 | orchestrator | 2025-06-02 12:53:15.367613 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exists] ******************** 2025-06-02 12:53:15.368347 | orchestrator | Monday 02 June 2025 12:53:15 +0000 (0:00:00.549) 0:00:00.723 *********** 2025-06-02 12:53:15.828871 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:15.829019 | orchestrator | 2025-06-02 12:53:15.830276 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exists] ************* 2025-06-02 12:53:15.831145 | orchestrator | Monday 02 June 2025 12:53:15 +0000 (0:00:00.461) 0:00:01.184 *********** 2025-06-02 12:53:21.063877 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-02 12:53:21.063990 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-02 12:53:21.064240 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-02 12:53:21.064473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-02 12:53:21.065157 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-02 12:53:21.065424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-02 12:53:21.066218 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-02 12:53:21.066266 | orchestrator | 2025-06-02 12:53:21.066660 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-02 12:53:21.067055 | orchestrator | Monday 02 June 2025 12:53:21 +0000 (0:00:05.231) 0:00:06.416 *********** 2025-06-02 12:53:21.126700 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:21.127608 | orchestrator | 2025-06-02 12:53:21.128877 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-02 12:53:21.129692 | orchestrator | Monday 02 June 2025 12:53:21 +0000 (0:00:00.065) 0:00:06.481 *********** 2025-06-02 12:53:21.648163 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:21.649345 | orchestrator | 2025-06-02 12:53:21.650859 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:53:21.650935 | orchestrator | 2025-06-02 12:53:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:53:21.650951 | orchestrator | 2025-06-02 12:53:21 | INFO  | Please wait and do not abort execution.
2025-06-02 12:53:21.651720 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 12:53:21.652874 | orchestrator | 2025-06-02 12:53:21.653695 | orchestrator | 2025-06-02 12:53:21.654463 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:53:21.655413 | orchestrator | Monday 02 June 2025 12:53:21 +0000 (0:00:00.523) 0:00:07.004 *********** 2025-06-02 12:53:21.655815 | orchestrator | =============================================================================== 2025-06-02 12:53:21.656346 | orchestrator | osism.commons.sshconfig : Ensure config for each host exists ------------- 5.23s 2025-06-02 12:53:21.656741 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2025-06-02 12:53:21.657485 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.52s 2025-06-02 12:53:21.658372 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exists -------------------- 0.46s 2025-06-02 12:53:21.658582 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-02 12:53:21.975460 | orchestrator | + osism apply known-hosts 2025-06-02 12:53:23.435168 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:53:23.435269 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:53:23.435292 | orchestrator | Registering Redlock._release_script 2025-06-02 12:53:23.487865 | orchestrator | 2025-06-02 12:53:23 | INFO  | Task 4775d033-e202-40c2-947c-e569cdc5f6a6 (known-hosts) was prepared for execution. 2025-06-02 12:53:23.487956 | orchestrator | 2025-06-02 12:53:23 | INFO  | It takes a moment until task 4775d033-e202-40c2-947c-e569cdc5f6a6 (known-hosts) has been started and output is visible here.
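The sshconfig play above builds the operator's SSH client configuration from fragments: one file per inventory host under ~/.ssh/config.d, assembled into a single ~/.ssh/config. The fragment contents are not shown in the log; an illustrative example, assuming the usual Host/HostName/User layout (the operator user here is dragon, and the node addresses match the known-hosts scan below):

    # ~/.ssh/config.d/testbed-node-0 (hypothetical contents)
    Host testbed-node-0
        HostName 192.168.16.10
        User dragon

    # The "Assemble ssh config" task is conceptually:
    cat ~/.ssh/config.d/* > ~/.ssh/config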
2025-06-02 12:53:27.114628 | orchestrator | 2025-06-02 12:53:27.114723 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-02 12:53:27.114960 | orchestrator | 2025-06-02 12:53:27.115022 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-02 12:53:27.115562 | orchestrator | Monday 02 June 2025 12:53:27 +0000 (0:00:00.159) 0:00:00.159 *********** 2025-06-02 12:53:32.885344 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 12:53:32.885476 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 12:53:32.886426 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 12:53:32.887550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 12:53:32.888658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 12:53:32.889836 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 12:53:32.891889 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-02 12:53:32.892426 | orchestrator | 2025-06-02 12:53:32.893189 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-02 12:53:32.893795 | orchestrator | Monday 02 June 2025 12:53:32 +0000 (0:00:05.773) 0:00:05.933 *********** 2025-06-02 12:53:33.074676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 12:53:33.075400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 12:53:33.076600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 12:53:33.077422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 12:53:33.078130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 12:53:33.078565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 12:53:33.079213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 12:53:33.079349 | orchestrator | 2025-06-02 12:53:33.080163 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:33.080278 | orchestrator | Monday 02 June 2025 12:53:33 +0000 (0:00:00.189) 0:00:06.123 *********** 2025-06-02 12:53:34.274127 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgGNP3+jSvWz1o88yutliMFhutVFEs/dEiMfdwbi/Kp) 2025-06-02 12:53:34.274985 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDjS72H7e7tN/cSP0daMW7eOz+xwRaqHw7PeQ56293V3IBW7oCXlGeJ8xH6huwFK/240pJFXEuFLtE5y+HXIF0Z/sY4wrvjQdVNfybUZwLpyrRLM56O1pDWZIzkh1IHObLKOU4AzrRkT952LLqG/1Mmbs0HATiPbz37ScGW9ioV//woXjIJRa28TihZdnZ0hF+5HKwWRcG1hkA01AghVdIzQchFEgRjpE1MRtaZdb8is6UXMPELe1Dp0C0JUp96vATsFVouWKdi/5yErYsBKsmazYS9i7OyGj336QczRzlC2xTcRyvuaJB9H+Ml9rJzPpglrNWCjmfeehpDilxXfirZ3NYQ47tuTKaKLPGBT0sESTRP12FLe1jF2IYaA4TsebkNeAVV06omy4uB96DHy5i6nE4f/GgJ8qoEHAJTXkNDAVM2B7Bbt3JUPHli9DZ3usSxLQzO8DoMSMS5WvkXoJuV0WUoWE1euDl79t/d6KlqJwVs5zBRHO9ahn/njB5cdTc=) 2025-06-02 12:53:34.275417 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOIgMHeStvSm54RprCWMZ5GUQHAIEpqdUS9QZUqvIV7ZGYXachydZ3joXyInj9r+spFJmXIMJzyuHY08R4I8AW4=) 2025-06-02 12:53:34.276052 | orchestrator | 2025-06-02 12:53:34.276736 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:34.277286 | orchestrator | Monday 02 June 2025 12:53:34 +0000 (0:00:01.196) 0:00:07.320 *********** 2025-06-02 12:53:35.343496 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtnHxicntxChYtoSq31ntO3d8MHL9zqsqU4P4uixJ85habKvHmJNITUDas+GvWJAURF+VeF5gjCBCnRQSuK5VFl1vI3Br4eBLL8zQn3PdlfcguWJhVOQbE67QExnxQoK9h3DYnkhciKNm2jBKiJRIuOBP+jOYzFO/XFVbUEprC/ZXy2K0NtW4YJ8cuOPlCOfmKQFgEd6xtCKhL+zfwkthOTCbIVKuCe9yyuWQsb/g33TfYWqpmK6IswILNfygobLO3y4mCBEorvWgV5R6kB3mcZgaz7GddaNqV3ub7SqDtzaEy9Ub2lvsE369pePyd1zTzNucrmhstbLXmqWl3YqNmx1YwUUcCZDzWY3/eh4dRFk/XkJbtARwLOrKXf35aVReOYirSVuBe37XoWxNzTz66cRZNy8FPe7UWq+ya/lOlwXi8OT/mIqZ7uQRUYDRSypMWiu6nfElzDNexNM/pL5yzq6+8EWj6tJ92C+VqZCzcidv5saD5kpcwZM8w+ZIUzU8=) 2025-06-02 12:53:35.343892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJjJUTP6F4H5eCwJOVYcNi38pR+HcA1bDkJtJQ7+VDEd84NQRn7KGqLSv+6smaE73ZYnrgbSjXbs3nqndQ+6ZDU=) 2025-06-02 12:53:35.346408 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOeDjjtJaaOfgAD50BINjR2GjnR1Qoq1Rb/CDv6X9137) 2025-06-02 12:53:35.346493 | orchestrator | 2025-06-02 12:53:35.347057 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:35.347323 | orchestrator | Monday 02 June 2025 12:53:35 +0000 (0:00:01.071) 0:00:08.391 *********** 2025-06-02 12:53:36.422942 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBcHjljkwF1eq2MbkVXLInbbBT3z7Sg9PHoyDYu6lXoRGcvD73D0CJopmo4pDYUOPF6gf7RHC1YhX6BaCUQddtiNTJnBg5vPRhDYXGovH4PRD+daWCS13WjWziv9NG8t7uqBv9l+u9ulAUc7PI1RE8rwaaLUgxfU0Tn2AeOGGIRiMq68nG6pEhpWzShL0CI8YVYCSRiRkjw2RyGe3NddVs9Z8INJZZEKoW8uTh6AcHGnSTgmnl84GV4MS3AO8/N8MgtcXiR0Tc2pdP2soQehc34z5c1SElLqOql7t4ux596achgdp+8sLSrzxQp9/VH9DCYay1QVZSybeiC3LiQwdjCfuqL2qhUw/cyoxF0GI9D5U17vGPzSKihuG8OgJa/g3BqwlFjd8lCBSUUXituBzlu6pgSqXdQig/5fW7Gp5UdtuAeNwViovFQscruNj7y5uM/OCu2XXhEqls4M3saEyHrSXGLu+bEAF3Gxak539X9tLkEzk8sgnsyYYW0nbP2U=) 2025-06-02 12:53:36.423284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVFSyD7ONwa5xGA77Ic2sV/8PiQ0+vjPYwSaK4ykvKw6jDcOBp9d8B0hGofZXBBY1NhzD8bxtIK7ShYl3Ya6x0=) 2025-06-02 12:53:36.424236 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8Zo7GVKL2VjGZ1Wlk/5Hocc5miqIsR7IL9WDwXtepc) 2025-06-02 
12:53:36.424968 | orchestrator | 2025-06-02 12:53:36.425385 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:36.425837 | orchestrator | Monday 02 June 2025 12:53:36 +0000 (0:00:01.079) 0:00:09.470 *********** 2025-06-02 12:53:37.487919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFfUSjEnAMN9h+c78d2zOw5GxO3WEHjbLEmZQ9oG582q) 2025-06-02 12:53:37.488508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC84vSyVP2YT5Bk9sv+He6US/wJdF7UUDEhpfyf7wJpZkV1xry3tOg3HqmEnbEfl3b+J7z810iAztFL3Z/awCSbM5Sq8wO6KoFug4EuYas0nNUqr4isPB2DINUtNhF8lJpVVtCAAdd3gKAAefnWufF5LpkXYPnnkdXDFJLETOTvkLqxe7xwbpYWMfJ8KS4Q0SMTjcEpo+OkXXn4g8jshV6nito+vwbif8JBUPFaKF+G2kawFZdifTqsetgXC36L1YUqmIVciLlWJcr8I0ii14S+4CT+dJbuGOQRPA/K/A0rLFAqmaUzrbnXq1a+QCCNslswt0b3oT+tdAgB3T7/6SVacRYcjGmE4BUPc7re0UvBDI0ypxFh+R5UoJ0KEf/9/A06ZWuZmRttBzu55zAN2Yqpo7cudUsXBahktfXxVpEOuYEqNyJTO6wWf1QemWELVogCEz0vtzkrjaDO8HuPmch6KTxUf5H+0UbeJoAiRKpO+Mn1O2BC4Ya6a5N86qowXzs=) 2025-06-02 12:53:37.489218 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGD01ZqYzO+j42Zmy2YIA1WDe5P4JlmgAj4+vK1FHOA0AFWH8LHb+nH6J+jmszbGybUWnL6aNu/WpOzeXuyeK3E=) 2025-06-02 12:53:37.490081 | orchestrator | 2025-06-02 12:53:37.490825 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:37.491314 | orchestrator | Monday 02 June 2025 12:53:37 +0000 (0:00:01.065) 0:00:10.536 *********** 2025-06-02 12:53:38.570802 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTMU3xsfveMzFs/nYoYNDFLZVXG7ywPRnpl5KXhRFtHAIxHDPzkbsEyCRI4yGNA1cQjzFazvEdR0sQZjebcfG8XhNgg666bRL+j3EW1pfhI14SBOcbc2r/FwA9Nc0OYq+4uFQCUzSQVQFGnaH2LkQbOY3Z7rYAhcNT4DLOcfxBuLahGPs4UOIxrcTkbq80hnoHcGUiRT6c/42SNbHNvBPhRZbpk+4QBhvraUc4J2M2QYT4GuKXMqfTG8WPSPMYWh7L6GCQv6QX00BIq91bBFiPo8Z2NpfozYjqMDu0mpg9NdX4fHiYI7Yd01QAqGtyg1FTRoaEkjhD9kwdkdaYrAjgylaklLRIbqHMwMj7Bh4g2fob5uWkREctS2gtvQz7vaSiuv9pk7EJOO7SVD49snY4HvV++zL8seyDZASdw87C+0vgXuGq2ziM9+T3slJS+SDXKc2aYzo1DEzjyYGrrfZCc9I3EwCycbSpyWLhfU1UEP/zFoYPQ0MM0r6NqXdn2JM=) 2025-06-02 12:53:38.571462 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLi0ekNy22ilVTbN/ILZtN7VaUKALeMQ0FBgfIf3g0Cx1xX07qga5lNvOysdB+c6mNqNbBwO1unl1cT7iy8934=) 2025-06-02 12:53:38.571795 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEm8q2wah/GmrmUf8lbeEjnRvAEdLgZRbcHIpqketHGf) 2025-06-02 12:53:38.572461 | orchestrator | 2025-06-02 12:53:38.573376 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:38.573966 | orchestrator | Monday 02 June 2025 12:53:38 +0000 (0:00:01.082) 0:00:11.618 *********** 2025-06-02 12:53:39.613185 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCg1ExGK427+4mpycDCMzJxRiOGjdJWhjbPWNaJz/GC9YQToPBbcn4jyIhoEmKWXagNbztKV60TRFmCbpHtuAgnqQubjWmIVDfg+aSPhZmXUI7ydEs1HuFGJiLB0JLqNqa0FJRJITRC70Se6zr60uEywezYT5KVgy5JkF7MiV2SiY1o3nJMDAmlxg8DFNw90r25bR3QI+KqStAvnpGYqI9Y0lrk9KU5a142L1hsvzTtXyFJVme15qPjUL4KbG2OdMEIp3NIk+u74557yTAHtLu5/zN7fomGLcemzB930FfnMbk7HdrbuaXnEXIOfsSTJ8ykgR0LuOGcoJvrz4J47J9S+PZYC4uTW8wacKLWSbxOSYLZiNgskml8/2AnSywjQqv94gah/Tfs0Vcglb21jKoJu3D4k6cXGnHNWiq5SdO6F4Zo2d1Vcg2bk778g//dORzLu6gbm8dBXrVwZ9hKERL8mPcODWl3JgsOc5a5HyK/CWid4s1k3+D/zJmisWisK38=) 2025-06-02 12:53:39.613384 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLv5j1bZwzvcgupt5yI6wUFgQYe4hdcOynbQGJm5UQWOr5ia8Sj+F8BdNRKqq6/+3lRK+vPJaSDsYdm7xVPZR+w=) 2025-06-02 12:53:39.613820 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdfD7tOyDv4agccFjGo06VirvIdd9Px2tHEyf8yfOVt) 2025-06-02 12:53:39.614837 | orchestrator | 2025-06-02 12:53:39.615720 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:39.616418 | orchestrator | Monday 02 June 2025 12:53:39 +0000 (0:00:01.039) 0:00:12.658 *********** 2025-06-02 12:53:40.679537 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+hCg2SHJH045Hv4Nmt1SQV9sR3FunfL/4V8mbMkdPxky5tG5//5dgwM3UmOIZVcS/tp7CZz8ZlGWlo3DzW8fvqNsfMmfD0MYXohICCEPkp3MsOgRmqtnq/fViQe9DGHrr5tVt46wjm6UB5fc48bIEOsfnXXbvyGIgO33kUeH6FBoLWEqP0FuaO8w49kL38CVK3KO6YaoPFGCcAs2lw14mG+Seux4LPb9l+ybnx8RNCgUiTxhAgTdDvRYo9zF0YPqTFuSkp9q224YReQWclraYTIgF+w78Y2d7NhLsg1di+hZDIAZorgvWophmKFzspeDQfzK3W9TQru0k8HMVbG+Wd2qMw+sgVrMFP9GLnuyMjiOQFMNV7U6G8z6XNYDDSu1MlMBjk63Mpm+yS8S4qsJv69xV5jD1DD3eM6HvhmkfcFon9lXqt6d16qUpVDagGzgVD8s6kFiwh8QRTLxMAzasudDqfUbyyxox/wmfbRx949Pz3ZFovDtmEyJWLl3TyfU=) 2025-06-02 12:53:40.680905 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHHXLyGAtB/qdFwRWd5OXXaSdPEORACNV3CKkHSlcDb79eubfyypT/xlD9wcOEmi7kWJDn9Qkabk/VcJu0Z8Rj0=) 2025-06-02 12:53:40.681361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwl0q49zAsn+3Gj8rwjFXDNylrJnDG8e8C0cevSjjSd) 2025-06-02 12:53:40.682146 | orchestrator | 2025-06-02 12:53:40.682757 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-02 12:53:40.683099 | orchestrator | Monday 02 June 2025 12:53:40 +0000 (0:00:01.069) 0:00:13.727 *********** 2025-06-02 12:53:46.046540 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 12:53:46.046675 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 12:53:46.046696 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 12:53:46.046716 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 12:53:46.048541 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 12:53:46.051231 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 12:53:46.051348 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-02 12:53:46.052103 | orchestrator | 2025-06-02 12:53:46.052473 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-02 12:53:46.053004 | orchestrator | Monday 02 June 2025 12:53:46 +0000 
(0:00:05.362) 0:00:19.090 *********** 2025-06-02 12:53:46.203695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 12:53:46.204245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 12:53:46.205709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 12:53:46.206859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 12:53:46.207840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 12:53:46.208621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 12:53:46.210198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 12:53:46.210249 | orchestrator | 2025-06-02 12:53:46.210730 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:46.211839 | orchestrator | Monday 02 June 2025 12:53:46 +0000 (0:00:00.161) 0:00:19.252 *********** 2025-06-02 12:53:47.269704 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgGNP3+jSvWz1o88yutliMFhutVFEs/dEiMfdwbi/Kp) 2025-06-02 12:53:47.270002 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjS72H7e7tN/cSP0daMW7eOz+xwRaqHw7PeQ56293V3IBW7oCXlGeJ8xH6huwFK/240pJFXEuFLtE5y+HXIF0Z/sY4wrvjQdVNfybUZwLpyrRLM56O1pDWZIzkh1IHObLKOU4AzrRkT952LLqG/1Mmbs0HATiPbz37ScGW9ioV//woXjIJRa28TihZdnZ0hF+5HKwWRcG1hkA01AghVdIzQchFEgRjpE1MRtaZdb8is6UXMPELe1Dp0C0JUp96vATsFVouWKdi/5yErYsBKsmazYS9i7OyGj336QczRzlC2xTcRyvuaJB9H+Ml9rJzPpglrNWCjmfeehpDilxXfirZ3NYQ47tuTKaKLPGBT0sESTRP12FLe1jF2IYaA4TsebkNeAVV06omy4uB96DHy5i6nE4f/GgJ8qoEHAJTXkNDAVM2B7Bbt3JUPHli9DZ3usSxLQzO8DoMSMS5WvkXoJuV0WUoWE1euDl79t/d6KlqJwVs5zBRHO9ahn/njB5cdTc=) 2025-06-02 12:53:47.270173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOIgMHeStvSm54RprCWMZ5GUQHAIEpqdUS9QZUqvIV7ZGYXachydZ3joXyInj9r+spFJmXIMJzyuHY08R4I8AW4=) 2025-06-02 12:53:47.271845 | orchestrator | 2025-06-02 12:53:47.271865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:47.271872 | orchestrator | Monday 02 June 2025 12:53:47 +0000 (0:00:01.066) 0:00:20.318 *********** 2025-06-02 12:53:48.339784 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJjJUTP6F4H5eCwJOVYcNi38pR+HcA1bDkJtJQ7+VDEd84NQRn7KGqLSv+6smaE73ZYnrgbSjXbs3nqndQ+6ZDU=) 2025-06-02 12:53:48.340046 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtnHxicntxChYtoSq31ntO3d8MHL9zqsqU4P4uixJ85habKvHmJNITUDas+GvWJAURF+VeF5gjCBCnRQSuK5VFl1vI3Br4eBLL8zQn3PdlfcguWJhVOQbE67QExnxQoK9h3DYnkhciKNm2jBKiJRIuOBP+jOYzFO/XFVbUEprC/ZXy2K0NtW4YJ8cuOPlCOfmKQFgEd6xtCKhL+zfwkthOTCbIVKuCe9yyuWQsb/g33TfYWqpmK6IswILNfygobLO3y4mCBEorvWgV5R6kB3mcZgaz7GddaNqV3ub7SqDtzaEy9Ub2lvsE369pePyd1zTzNucrmhstbLXmqWl3YqNmx1YwUUcCZDzWY3/eh4dRFk/XkJbtARwLOrKXf35aVReOYirSVuBe37XoWxNzTz66cRZNy8FPe7UWq+ya/lOlwXi8OT/mIqZ7uQRUYDRSypMWiu6nfElzDNexNM/pL5yzq6+8EWj6tJ92C+VqZCzcidv5saD5kpcwZM8w+ZIUzU8=) 2025-06-02 12:53:48.341273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOeDjjtJaaOfgAD50BINjR2GjnR1Qoq1Rb/CDv6X9137) 2025-06-02 12:53:48.342399 | orchestrator | 2025-06-02 12:53:48.342891 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:48.344790 | orchestrator | Monday 02 June 2025 12:53:48 +0000 (0:00:01.069) 0:00:21.387 *********** 2025-06-02 12:53:49.386533 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8Zo7GVKL2VjGZ1Wlk/5Hocc5miqIsR7IL9WDwXtepc) 2025-06-02 12:53:49.387989 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBcHjljkwF1eq2MbkVXLInbbBT3z7Sg9PHoyDYu6lXoRGcvD73D0CJopmo4pDYUOPF6gf7RHC1YhX6BaCUQddtiNTJnBg5vPRhDYXGovH4PRD+daWCS13WjWziv9NG8t7uqBv9l+u9ulAUc7PI1RE8rwaaLUgxfU0Tn2AeOGGIRiMq68nG6pEhpWzShL0CI8YVYCSRiRkjw2RyGe3NddVs9Z8INJZZEKoW8uTh6AcHGnSTgmnl84GV4MS3AO8/N8MgtcXiR0Tc2pdP2soQehc34z5c1SElLqOql7t4ux596achgdp+8sLSrzxQp9/VH9DCYay1QVZSybeiC3LiQwdjCfuqL2qhUw/cyoxF0GI9D5U17vGPzSKihuG8OgJa/g3BqwlFjd8lCBSUUXituBzlu6pgSqXdQig/5fW7Gp5UdtuAeNwViovFQscruNj7y5uM/OCu2XXhEqls4M3saEyHrSXGLu+bEAF3Gxak539X9tLkEzk8sgnsyYYW0nbP2U=) 2025-06-02 12:53:49.388848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHVFSyD7ONwa5xGA77Ic2sV/8PiQ0+vjPYwSaK4ykvKw6jDcOBp9d8B0hGofZXBBY1NhzD8bxtIK7ShYl3Ya6x0=) 2025-06-02 12:53:49.388910 | orchestrator | 2025-06-02 12:53:49.390216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:49.391583 | orchestrator | Monday 02 June 2025 12:53:49 +0000 (0:00:01.046) 0:00:22.434 *********** 2025-06-02 12:53:50.436148 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC84vSyVP2YT5Bk9sv+He6US/wJdF7UUDEhpfyf7wJpZkV1xry3tOg3HqmEnbEfl3b+J7z810iAztFL3Z/awCSbM5Sq8wO6KoFug4EuYas0nNUqr4isPB2DINUtNhF8lJpVVtCAAdd3gKAAefnWufF5LpkXYPnnkdXDFJLETOTvkLqxe7xwbpYWMfJ8KS4Q0SMTjcEpo+OkXXn4g8jshV6nito+vwbif8JBUPFaKF+G2kawFZdifTqsetgXC36L1YUqmIVciLlWJcr8I0ii14S+4CT+dJbuGOQRPA/K/A0rLFAqmaUzrbnXq1a+QCCNslswt0b3oT+tdAgB3T7/6SVacRYcjGmE4BUPc7re0UvBDI0ypxFh+R5UoJ0KEf/9/A06ZWuZmRttBzu55zAN2Yqpo7cudUsXBahktfXxVpEOuYEqNyJTO6wWf1QemWELVogCEz0vtzkrjaDO8HuPmch6KTxUf5H+0UbeJoAiRKpO+Mn1O2BC4Ya6a5N86qowXzs=) 2025-06-02 12:53:50.436822 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGD01ZqYzO+j42Zmy2YIA1WDe5P4JlmgAj4+vK1FHOA0AFWH8LHb+nH6J+jmszbGybUWnL6aNu/WpOzeXuyeK3E=) 2025-06-02 12:53:50.437587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFfUSjEnAMN9h+c78d2zOw5GxO3WEHjbLEmZQ9oG582q) 2025-06-02 12:53:50.438310 
| orchestrator | 2025-06-02 12:53:50.438968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:50.439490 | orchestrator | Monday 02 June 2025 12:53:50 +0000 (0:00:01.048) 0:00:23.482 *********** 2025-06-02 12:53:51.496137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTMU3xsfveMzFs/nYoYNDFLZVXG7ywPRnpl5KXhRFtHAIxHDPzkbsEyCRI4yGNA1cQjzFazvEdR0sQZjebcfG8XhNgg666bRL+j3EW1pfhI14SBOcbc2r/FwA9Nc0OYq+4uFQCUzSQVQFGnaH2LkQbOY3Z7rYAhcNT4DLOcfxBuLahGPs4UOIxrcTkbq80hnoHcGUiRT6c/42SNbHNvBPhRZbpk+4QBhvraUc4J2M2QYT4GuKXMqfTG8WPSPMYWh7L6GCQv6QX00BIq91bBFiPo8Z2NpfozYjqMDu0mpg9NdX4fHiYI7Yd01QAqGtyg1FTRoaEkjhD9kwdkdaYrAjgylaklLRIbqHMwMj7Bh4g2fob5uWkREctS2gtvQz7vaSiuv9pk7EJOO7SVD49snY4HvV++zL8seyDZASdw87C+0vgXuGq2ziM9+T3slJS+SDXKc2aYzo1DEzjyYGrrfZCc9I3EwCycbSpyWLhfU1UEP/zFoYPQ0MM0r6NqXdn2JM=) 2025-06-02 12:53:51.496253 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNLi0ekNy22ilVTbN/ILZtN7VaUKALeMQ0FBgfIf3g0Cx1xX07qga5lNvOysdB+c6mNqNbBwO1unl1cT7iy8934=) 2025-06-02 12:53:51.497091 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEm8q2wah/GmrmUf8lbeEjnRvAEdLgZRbcHIpqketHGf) 2025-06-02 12:53:51.497865 | orchestrator | 2025-06-02 12:53:51.498628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:51.499126 | orchestrator | Monday 02 June 2025 12:53:51 +0000 (0:00:01.058) 0:00:24.540 *********** 2025-06-02 12:53:52.556153 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg1ExGK427+4mpycDCMzJxRiOGjdJWhjbPWNaJz/GC9YQToPBbcn4jyIhoEmKWXagNbztKV60TRFmCbpHtuAgnqQubjWmIVDfg+aSPhZmXUI7ydEs1HuFGJiLB0JLqNqa0FJRJITRC70Se6zr60uEywezYT5KVgy5JkF7MiV2SiY1o3nJMDAmlxg8DFNw90r25bR3QI+KqStAvnpGYqI9Y0lrk9KU5a142L1hsvzTtXyFJVme15qPjUL4KbG2OdMEIp3NIk+u74557yTAHtLu5/zN7fomGLcemzB930FfnMbk7HdrbuaXnEXIOfsSTJ8ykgR0LuOGcoJvrz4J47J9S+PZYC4uTW8wacKLWSbxOSYLZiNgskml8/2AnSywjQqv94gah/Tfs0Vcglb21jKoJu3D4k6cXGnHNWiq5SdO6F4Zo2d1Vcg2bk778g//dORzLu6gbm8dBXrVwZ9hKERL8mPcODWl3JgsOc5a5HyK/CWid4s1k3+D/zJmisWisK38=) 2025-06-02 12:53:52.557440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLv5j1bZwzvcgupt5yI6wUFgQYe4hdcOynbQGJm5UQWOr5ia8Sj+F8BdNRKqq6/+3lRK+vPJaSDsYdm7xVPZR+w=) 2025-06-02 12:53:52.558485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdfD7tOyDv4agccFjGo06VirvIdd9Px2tHEyf8yfOVt) 2025-06-02 12:53:52.558532 | orchestrator | 2025-06-02 12:53:52.559768 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 12:53:52.560063 | orchestrator | Monday 02 June 2025 12:53:52 +0000 (0:00:01.062) 0:00:25.603 *********** 2025-06-02 12:53:53.635825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwl0q49zAsn+3Gj8rwjFXDNylrJnDG8e8C0cevSjjSd) 2025-06-02 12:53:53.636163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+hCg2SHJH045Hv4Nmt1SQV9sR3FunfL/4V8mbMkdPxky5tG5//5dgwM3UmOIZVcS/tp7CZz8ZlGWlo3DzW8fvqNsfMmfD0MYXohICCEPkp3MsOgRmqtnq/fViQe9DGHrr5tVt46wjm6UB5fc48bIEOsfnXXbvyGIgO33kUeH6FBoLWEqP0FuaO8w49kL38CVK3KO6YaoPFGCcAs2lw14mG+Seux4LPb9l+ybnx8RNCgUiTxhAgTdDvRYo9zF0YPqTFuSkp9q224YReQWclraYTIgF+w78Y2d7NhLsg1di+hZDIAZorgvWophmKFzspeDQfzK3W9TQru0k8HMVbG+Wd2qMw+sgVrMFP9GLnuyMjiOQFMNV7U6G8z6XNYDDSu1MlMBjk63Mpm+yS8S4qsJv69xV5jD1DD3eM6HvhmkfcFon9lXqt6d16qUpVDagGzgVD8s6kFiwh8QRTLxMAzasudDqfUbyyxox/wmfbRx949Pz3ZFovDtmEyJWLl3TyfU=) 2025-06-02 12:53:53.637089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHHXLyGAtB/qdFwRWd5OXXaSdPEORACNV3CKkHSlcDb79eubfyypT/xlD9wcOEmi7kWJDn9Qkabk/VcJu0Z8Rj0=) 2025-06-02 12:53:53.637757 | orchestrator | 2025-06-02 12:53:53.638981 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-02 12:53:53.639261 | orchestrator | Monday 02 June 2025 12:53:53 +0000 (0:00:01.078) 0:00:26.681 *********** 2025-06-02 12:53:53.798259 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 12:53:53.798938 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 12:53:53.800063 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 12:53:53.800902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 12:53:53.801941 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 12:53:53.802760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 12:53:53.803683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 12:53:53.804011 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:53.804455 | orchestrator | 2025-06-02 12:53:53.805113 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-02 12:53:53.805786 | orchestrator | Monday 02 June 2025 12:53:53 +0000 (0:00:00.164) 0:00:26.846 *********** 2025-06-02 12:53:53.861978 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:53.862393 | orchestrator | 2025-06-02 12:53:53.863175 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-02 12:53:53.863930 | orchestrator | Monday 02 June 2025 12:53:53 +0000 (0:00:00.064) 0:00:26.910 *********** 2025-06-02 12:53:53.915107 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:53.915601 | orchestrator | 2025-06-02 12:53:53.916076 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-02 12:53:53.917222 | orchestrator | Monday 02 June 2025 12:53:53 +0000 (0:00:00.053) 0:00:26.964 *********** 2025-06-02 12:53:54.564383 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:54.564496 | orchestrator | 2025-06-02 12:53:54.564913 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:53:54.564951 | orchestrator | 2025-06-02 12:53:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:53:54.564972 | orchestrator | 2025-06-02 12:53:54 | INFO  | Please wait and do not abort execution. 
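The scan-and-write loop of this known_hosts play reduces to running ssh-keyscan once per inventory hostname and once per ansible_host address, then appending the collected RSA, ECDSA, and Ed25519 keys to the operator's known_hosts file. A rough shell equivalent (target file and key-type flags assumed):

    for host in testbed-manager testbed-node-{0..5} 192.168.16.{5,10,11,12,13,14,15}; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$host"
    done >> ~/.ssh/known_hosts
    # The closing "Set file permissions" task then tightens the mode of the
    # file; the exact mode is not shown in the log.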
2025-06-02 12:53:54.565268 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 12:53:54.565708 | orchestrator | 2025-06-02 12:53:54.566168 | orchestrator | 2025-06-02 12:53:54.566540 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:53:54.567407 | orchestrator | Monday 02 June 2025 12:53:54 +0000 (0:00:00.647) 0:00:27.612 *********** 2025-06-02 12:53:54.567719 | orchestrator | =============================================================================== 2025-06-02 12:53:54.568091 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.77s 2025-06-02 12:53:54.568939 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.36s 2025-06-02 12:53:54.569129 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-06-02 12:53:54.570109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-02 12:53:54.570131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-02 12:53:54.570387 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-02 12:53:54.571393 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 12:53:54.571702 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 12:53:54.571954 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 12:53:54.572395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 12:53:54.573097 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-02 12:53:54.573119 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-02 12:53:54.573786 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-02 12:53:54.574186 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-02 12:53:54.574528 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-02 12:53:54.575329 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-02 12:53:54.575532 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s 2025-06-02 12:53:54.576131 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-06-02 12:53:54.576507 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-02 12:53:54.576744 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-02 12:53:55.077739 | orchestrator | + osism apply squid 2025-06-02 12:53:56.746580 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:53:56.746715 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:53:56.746731 | orchestrator | Registering Redlock._release_script 2025-06-02 12:53:56.804360 | orchestrator | 2025-06-02 12:53:56 | INFO  | Task a26d8f51-15e1-4aed-a5b0-df3df80fa7de (squid) was 
prepared for execution. 2025-06-02 12:53:56.804461 | orchestrator | 2025-06-02 12:53:56 | INFO  | It takes a moment until task a26d8f51-15e1-4aed-a5b0-df3df80fa7de (squid) has been started and output is visible here. 2025-06-02 12:54:00.680713 | orchestrator | 2025-06-02 12:54:00.682700 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-02 12:54:00.682736 | orchestrator | 2025-06-02 12:54:00.683433 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-02 12:54:00.684302 | orchestrator | Monday 02 June 2025 12:54:00 +0000 (0:00:00.151) 0:00:00.151 *********** 2025-06-02 12:54:00.760677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 12:54:00.762124 | orchestrator | 2025-06-02 12:54:00.762684 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-02 12:54:00.763755 | orchestrator | Monday 02 June 2025 12:54:00 +0000 (0:00:00.081) 0:00:00.232 *********** 2025-06-02 12:54:01.937523 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:01.937613 | orchestrator | 2025-06-02 12:54:01.938106 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-02 12:54:01.939153 | orchestrator | Monday 02 June 2025 12:54:01 +0000 (0:00:01.175) 0:00:01.407 *********** 2025-06-02 12:54:03.019151 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-02 12:54:03.020074 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-02 12:54:03.021126 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-02 12:54:03.022083 | orchestrator | 2025-06-02 12:54:03.022973 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-02 12:54:03.023587 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:01.081) 0:00:02.489 *********** 2025-06-02 12:54:03.972983 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-02 12:54:03.973711 | orchestrator | 2025-06-02 12:54:03.974354 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-02 12:54:03.975550 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:00.953) 0:00:03.443 *********** 2025-06-02 12:54:04.344171 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:04.345353 | orchestrator | 2025-06-02 12:54:04.345901 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-02 12:54:04.346214 | orchestrator | Monday 02 June 2025 12:54:04 +0000 (0:00:00.372) 0:00:03.815 *********** 2025-06-02 12:54:05.266852 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:05.267854 | orchestrator | 2025-06-02 12:54:05.268038 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-02 12:54:05.268637 | orchestrator | Monday 02 June 2025 12:54:05 +0000 (0:00:00.921) 0:00:04.736 *********** 2025-06-02 12:54:37.387655 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-02 12:54:37.387772 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:37.387789 | orchestrator | 2025-06-02 12:54:37.387801 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-02 12:54:37.387920 | orchestrator | Monday 02 June 2025 12:54:37 +0000 (0:00:32.116) 0:00:36.852 *********** 2025-06-02 12:54:49.887999 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:49.888119 | orchestrator | 2025-06-02 12:54:49.888136 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-02 12:54:49.888149 | orchestrator | Monday 02 June 2025 12:54:49 +0000 (0:00:12.501) 0:00:49.353 *********** 2025-06-02 12:55:49.969914 | orchestrator | Pausing for 60 seconds 2025-06-02 12:55:49.970163 | orchestrator | changed: [testbed-manager] 2025-06-02 12:55:49.970221 | orchestrator | 2025-06-02 12:55:49.970231 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-02 12:55:49.970276 | orchestrator | Monday 02 June 2025 12:55:49 +0000 (0:01:00.079) 0:01:49.433 *********** 2025-06-02 12:55:50.039534 | orchestrator | ok: [testbed-manager] 2025-06-02 12:55:50.040275 | orchestrator | 2025-06-02 12:55:50.041566 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for a healthy squid service] ***** 2025-06-02 12:55:50.042287 | orchestrator | Monday 02 June 2025 12:55:50 +0000 (0:00:00.076) 0:01:49.510 *********** 2025-06-02 12:55:50.623044 | orchestrator | changed: [testbed-manager] 2025-06-02 12:55:50.623495 | orchestrator | 2025-06-02 12:55:50.623960 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:55:50.624085 | orchestrator | 2025-06-02 12:55:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:55:50.624376 | orchestrator | 2025-06-02 12:55:50 | INFO  | Please wait and do not abort execution.
2025-06-02 12:55:50.625348 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 12:55:50.626010 | orchestrator | 2025-06-02 12:55:50.626610 | orchestrator | 2025-06-02 12:55:50.626947 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:55:50.627742 | orchestrator | Monday 02 June 2025 12:55:50 +0000 (0:00:00.582) 0:01:50.093 *********** 2025-06-02 12:55:50.628502 | orchestrator | =============================================================================== 2025-06-02 12:55:50.629310 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-06-02 12:55:50.629887 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.12s 2025-06-02 12:55:50.630542 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.50s 2025-06-02 12:55:50.631095 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.18s 2025-06-02 12:55:50.631772 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s 2025-06-02 12:55:50.632449 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2025-06-02 12:55:50.632922 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-06-02 12:55:50.633557 | orchestrator | osism.services.squid : Wait for a healthy squid service ---------------- 0.58s 2025-06-02 12:55:50.634819 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-06-02 12:55:50.635058 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-06-02 12:55:50.635164 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-06-02 12:55:51.110342 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 12:55:51.110941 | orchestrator | ++ semver latest 9.0.0 2025-06-02 12:55:51.168560 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-02 12:55:51.168662 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 12:55:51.169689 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-02 12:55:52.872912 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:55:52.873016 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:55:52.873031 | orchestrator | Registering Redlock._release_script 2025-06-02 12:55:52.931211 | orchestrator | 2025-06-02 12:55:52 | INFO  | Task 4779dfb7-ded1-4b82-bab9-6bee42d9bd31 (operator) was prepared for execution. 2025-06-02 12:55:52.931323 | orchestrator | 2025-06-02 12:55:52 | INFO  | It takes a moment until task 4779dfb7-ded1-4b82-bab9-6bee42d9bd31 (operator) has been started and output is visible here.
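The squid play above runs the proxy as a compose-managed container: the role installs prerequisites, lays down a configuration directory under /opt/squid together with a docker-compose.yml, then its handlers restart the service, pause for 60 seconds, and wait until the container reports healthy. Verifying the result by hand would look roughly like this (assuming the compose file lands in /opt/squid; the container name squid is also an assumption):

    # List the compose-managed squid service and its health state
    docker compose --project-directory /opt/squid ps
    # The "Wait for a healthy squid service" handler boils down to a check like:
    docker inspect -f '{{.State.Health.Status}}' squid   # expect: healthy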
2025-06-02 12:55:56.893761 | orchestrator | 2025-06-02 12:55:56.897766 | orchestrator | PLAY [Make ssh pipelining work] ********************************************* 2025-06-02 12:55:56.897805 | orchestrator | 2025-06-02 12:55:56.897820 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:55:56.898712 | orchestrator | Monday 02 June 2025 12:55:56 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-02 12:56:00.148427 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:56:00.148982 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:56:00.149823 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:56:00.152007 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:56:00.152046 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:56:00.152058 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:56:00.153209 | orchestrator | 2025-06-02 12:56:00.154380 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-02 12:56:00.155055 | orchestrator | Monday 02 June 2025 12:56:00 +0000 (0:00:03.257) 0:00:03.401 *********** 2025-06-02 12:56:01.001676 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:56:01.001881 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:56:01.003069 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:56:01.004977 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:56:01.005526 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:56:01.006320 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:56:01.007275 | orchestrator | 2025-06-02 12:56:01.007984 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-02 12:56:01.008715 | orchestrator | 2025-06-02 12:56:01.009390 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 12:56:01.010481 | orchestrator | Monday 02 June 2025 12:56:00 +0000 (0:00:00.852) 0:00:04.254 *********** 2025-06-02 12:56:01.073913 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:56:01.096129 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:56:01.126282 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:56:01.165322 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:56:01.165710 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:56:01.167080 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:56:01.168884 | orchestrator | 2025-06-02 12:56:01.169271 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 12:56:01.169710 | orchestrator | Monday 02 June 2025 12:56:01 +0000 (0:00:00.224) 0:00:04.418 *********** 2025-06-02 12:56:01.246449 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:56:01.271047 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:56:01.307668 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:56:01.391874 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:56:01.395269 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:56:01.395316 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:56:01.395411 | orchestrator | 2025-06-02 12:56:01.395909 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 12:56:01.396132 | orchestrator | Monday 02 June 2025 12:56:01 +0000 (0:00:00.610) 0:00:04.643 *********** 2025-06-02 12:56:02.004067 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:56:02.004227 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:56:02.006802 | orchestrator | changed: [testbed-node-5] 2025-06-02
12:56:02.006829 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:56:02.006841 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:56:02.006853 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:56:02.006864 | orchestrator | 2025-06-02 12:56:02.007098 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 12:56:02.007732 | orchestrator | Monday 02 June 2025 12:56:01 +0000 (0:00:00.610) 0:00:05.253 *********** 2025-06-02 12:56:02.811673 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:56:02.812622 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:56:02.813476 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:56:02.816071 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:56:02.816813 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:56:02.817561 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:56:02.818857 | orchestrator | 2025-06-02 12:56:02.819804 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 12:56:02.820221 | orchestrator | Monday 02 June 2025 12:56:02 +0000 (0:00:00.809) 0:00:06.063 *********** 2025-06-02 12:56:04.028621 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-02 12:56:04.028829 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-02 12:56:04.030076 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-02 12:56:04.031618 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-02 12:56:04.032591 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-02 12:56:04.033861 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-02 12:56:04.034087 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-02 12:56:04.034720 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-02 12:56:04.035860 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-02 12:56:04.037604 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-02 12:56:04.038245 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-02 12:56:04.038713 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-02 12:56:04.039376 | orchestrator | 2025-06-02 12:56:04.039786 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-02 12:56:04.040262 | orchestrator | Monday 02 June 2025 12:56:04 +0000 (0:00:01.216) 0:00:07.280 *********** 2025-06-02 12:56:05.275711 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:56:05.276297 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:56:05.276723 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:56:05.276822 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:56:05.279831 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:56:05.279919 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:56:05.281517 | orchestrator | 2025-06-02 12:56:05.282109 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 12:56:05.282498 | orchestrator | Monday 02 June 2025 12:56:05 +0000 (0:00:01.248) 0:00:08.528 *********** 2025-06-02 12:56:06.444686 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-02 12:56:06.445505 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-02 12:56:06.448594 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-02 12:56:06.568073 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.568730 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.570329 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.570736 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.570994 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.571860 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:56:06.572921 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.573705 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.574215 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.575221 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.575576 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.576028 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-02 12:56:06.576625 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.577368 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.577999 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.578649 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.578858 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.579440 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:56:06.579983 | orchestrator | 2025-06-02 12:56:06.580242 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 12:56:06.580752 | orchestrator | Monday 02 June 2025 12:56:06 +0000 (0:00:01.292) 0:00:09.821 *********** 2025-06-02 12:56:07.152726 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:56:07.152887 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:56:07.153358 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:56:07.154075 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:56:07.154960 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:56:07.155333 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:56:07.155697 | orchestrator | 2025-06-02 12:56:07.156987 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 12:56:07.157007 | orchestrator | Monday 02 June 2025 12:56:07 +0000 (0:00:00.585) 0:00:10.406 *********** 2025-06-02 12:56:07.222959 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:56:07.254488 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:56:07.275828 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:56:07.324827 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:56:07.327012 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:56:07.328459 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:56:07.329960 | orchestrator | 2025-06-02 12:56:07.330835 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
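Editor's note: the osism.commons.operator tasks above boil down to standard Ansible user management. A minimal sketch of what they plausibly look like follows; this is not the actual role source, and the operator name "dragon" (the OSISM default) and the group list are inferred from the task output above.

    # Hedged sketch of the operator tasks; names and variables are assumptions.
    - name: Create operator group
      become: true
      ansible.builtin.group:
        name: dragon
        state: present

    - name: Create user
      become: true
      ansible.builtin.user:
        name: dragon
        group: dragon
        shell: /bin/bash
        create_home: true

    - name: Add user to additional groups
      become: true
      ansible.builtin.user:
        name: dragon
        groups: "{{ item }}"
        append: true
      loop:
        - adm
        - sudo

    - name: Set language variables in .bashrc configuration file
      become: true
      ansible.builtin.lineinfile:
        path: /home/dragon/.bashrc
        line: "{{ item }}"
      loop:
        - export LANGUAGE=C.UTF-8
        - export LANG=C.UTF-8
        - export LC_ALL=C.UTF-8

The loop items match the `(item=adm)`, `(item=sudo)` and `(item=export ...)` labels printed in the log.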
2025-06-02 12:56:07.330835 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 12:56:07.332468 | orchestrator | Monday 02 June 2025 12:56:07 +0000 (0:00:00.172) 0:00:10.578 ***********
2025-06-02 12:56:08.018754 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 12:56:08.019043 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:08.019695 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 12:56:08.019727 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 12:56:08.020118 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 12:56:08.020786 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:08.020815 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:08.021391 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:08.021718 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 12:56:08.021738 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 12:56:08.022116 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:08.023600 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:08.023633 | orchestrator |
2025-06-02 12:56:08.023647 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 12:56:08.023660 | orchestrator | Monday 02 June 2025 12:56:08 +0000 (0:00:00.693) 0:00:11.272 ***********
2025-06-02 12:56:08.068741 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:56:08.093201 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:56:08.115612 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:56:08.180852 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:08.181677 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:08.182520 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:08.183511 | orchestrator |
2025-06-02 12:56:08.184171 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 12:56:08.184808 | orchestrator | Monday 02 June 2025 12:56:08 +0000 (0:00:00.163) 0:00:11.435 ***********
2025-06-02 12:56:08.234760 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:56:08.258986 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:56:08.285376 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:56:08.310367 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:08.343341 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:08.344039 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:08.345436 | orchestrator |
2025-06-02 12:56:08.346770 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 12:56:08.347744 | orchestrator | Monday 02 June 2025 12:56:08 +0000 (0:00:00.161) 0:00:11.596 ***********
2025-06-02 12:56:08.424002 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:56:08.449407 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:56:08.474268 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:56:08.506619 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:08.507483 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:08.508746 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:08.509688 | orchestrator |
2025-06-02 12:56:08.510839 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 12:56:08.511472 | orchestrator | Monday 02 June 2025 12:56:08 +0000 (0:00:00.163) 0:00:11.760 ***********
2025-06-02 12:56:09.135045 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:09.136398 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:09.138438 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:09.139100 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:09.140645 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:09.141504 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:09.142772 | orchestrator |
2025-06-02 12:56:09.143556 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 12:56:09.144249 | orchestrator | Monday 02 June 2025 12:56:09 +0000 (0:00:00.627) 0:00:12.387 ***********
2025-06-02 12:56:09.242580 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:56:09.271359 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:56:09.407597 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:56:09.407699 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:09.407715 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:09.408610 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:09.410663 | orchestrator |
2025-06-02 12:56:09.412068 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:56:09.412967 | orchestrator | 2025-06-02 12:56:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:56:09.413972 | orchestrator | 2025-06-02 12:56:09 | INFO  | Please wait and do not abort execution.
2025-06-02 12:56:09.415315 | orchestrator | testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.416183 | orchestrator | testbed-node-1 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.417312 | orchestrator | testbed-node-2 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.418627 | orchestrator | testbed-node-3 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.420566 | orchestrator | testbed-node-4 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.421820 | orchestrator | testbed-node-5 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:56:09.422988 | orchestrator |
2025-06-02 12:56:09.424102 | orchestrator |
2025-06-02 12:56:09.425214 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:56:09.426116 | orchestrator | Monday 02 June 2025 12:56:09 +0000 (0:00:00.269) 0:00:12.657 ***********
2025-06-02 12:56:09.427253 | orchestrator | ===============================================================================
2025-06-02 12:56:09.427947 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s
2025-06-02 12:56:09.428625 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2025-06-02 12:56:09.429853 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-06-02 12:56:09.430260 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2025-06-02 12:56:09.430974 | orchestrator | Do not require tty for all users ---------------------------------------- 0.85s
2025-06-02 12:56:09.431870 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-06-02 12:56:09.432589 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-06-02 12:56:09.432954 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-06-02 12:56:09.433608 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-06-02 12:56:09.434205 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-02 12:56:09.434690 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2025-06-02 12:56:09.435138 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s
2025-06-02 12:56:09.435723 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-06-02 12:56:09.436214 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-02 12:56:09.436997 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-06-02 12:56:09.437372 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-06-02 12:56:09.437851 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-06-02 12:56:09.941389 | orchestrator | + osism apply --environment custom facts
2025-06-02 12:56:11.608999 | orchestrator | 2025-06-02 12:56:11 | INFO  | Trying to run play facts in environment custom
2025-06-02 12:56:11.614266 | orchestrator | Registering Redlock._acquired_script
2025-06-02 12:56:11.614349 | orchestrator | Registering Redlock._extend_script
2025-06-02 12:56:11.614364 | orchestrator | Registering Redlock._release_script
2025-06-02 12:56:11.674088 | orchestrator | 2025-06-02 12:56:11 | INFO  | Task 2edc367e-de9f-4904-a24f-ed8dc8affb8c (facts) was prepared for execution.
2025-06-02 12:56:11.674315 | orchestrator | 2025-06-02 12:56:11 | INFO  | It takes a moment until task 2edc367e-de9f-4904-a24f-ed8dc8affb8c (facts) has been started and output is visible here.
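Editor's note: the "Set ssh authorized keys" step above is most likely a loop over ansible.posix.authorized_key; the `(item=None)` labels suggest the loop items are unlabeled structures, probably so key material is not echoed to the log. A hedged sketch follows; the variable name operator_authorized_keys and the item structure are assumptions, not taken from the role source.

    # Hedged sketch; variable and field names are assumed.
    - name: Set ssh authorized keys
      become: true
      ansible.posix.authorized_key:
        user: dragon                 # operator user name assumed
        key: "{{ item.key }}"        # item structure assumed
        state: present
      loop: "{{ operator_authorized_keys | default([]) }}"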
2025-06-02 12:56:15.584806 | orchestrator |
2025-06-02 12:56:15.585745 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 12:56:15.587401 | orchestrator |
2025-06-02 12:56:15.588465 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 12:56:15.591852 | orchestrator | Monday 02 June 2025 12:56:15 +0000 (0:00:00.086) 0:00:00.086 ***********
2025-06-02 12:56:17.058254 | orchestrator | ok: [testbed-manager]
2025-06-02 12:56:17.058435 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:17.059743 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:17.060684 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:17.061622 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:17.062869 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:17.064480 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:17.065284 | orchestrator |
2025-06-02 12:56:17.066265 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 12:56:17.067316 | orchestrator | Monday 02 June 2025 12:56:17 +0000 (0:00:01.476) 0:00:01.562 ***********
2025-06-02 12:56:18.292609 | orchestrator | ok: [testbed-manager]
2025-06-02 12:56:18.294947 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:18.294989 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:18.294997 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:18.295562 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:18.296339 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:18.296742 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:18.297540 | orchestrator |
2025-06-02 12:56:18.298740 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 12:56:18.299429 | orchestrator |
2025-06-02 12:56:18.299902 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 12:56:18.300606 | orchestrator | Monday 02 June 2025 12:56:18 +0000 (0:00:01.233) 0:00:02.796 ***********
2025-06-02 12:56:18.411684 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:18.412093 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:18.412751 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:18.413413 | orchestrator |
2025-06-02 12:56:18.414798 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 12:56:18.414959 | orchestrator | Monday 02 June 2025 12:56:18 +0000 (0:00:00.122) 0:00:02.918 ***********
2025-06-02 12:56:18.617467 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:18.618399 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:18.618432 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:18.618960 | orchestrator |
2025-06-02 12:56:18.619374 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 12:56:18.620667 | orchestrator | Monday 02 June 2025 12:56:18 +0000 (0:00:00.206) 0:00:03.125 ***********
2025-06-02 12:56:18.825577 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:18.825763 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:18.826320 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:18.827499 | orchestrator |
2025-06-02 12:56:18.827532 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 12:56:18.827546 | orchestrator | Monday 02 June 2025 12:56:18 +0000 (0:00:00.206) 0:00:03.331 ***********
2025-06-02 12:56:18.986986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 12:56:18.988107 | orchestrator |
2025-06-02 12:56:18.988703 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 12:56:18.991884 | orchestrator | Monday 02 June 2025 12:56:18 +0000 (0:00:00.162) 0:00:03.494 ***********
2025-06-02 12:56:19.421378 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:19.421481 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:19.421495 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:19.423638 | orchestrator |
2025-06-02 12:56:19.425288 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 12:56:19.425318 | orchestrator | Monday 02 June 2025 12:56:19 +0000 (0:00:00.431) 0:00:03.925 ***********
2025-06-02 12:56:19.536594 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:19.536752 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:19.537256 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:19.537612 | orchestrator |
2025-06-02 12:56:19.537976 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 12:56:19.538645 | orchestrator | Monday 02 June 2025 12:56:19 +0000 (0:00:00.118) 0:00:04.044 ***********
2025-06-02 12:56:20.564028 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:20.564212 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:20.564229 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:20.564242 | orchestrator |
2025-06-02 12:56:20.564255 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 12:56:20.564335 | orchestrator | Monday 02 June 2025 12:56:20 +0000 (0:00:01.024) 0:00:05.068 ***********
2025-06-02 12:56:21.046876 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:21.048675 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:21.051056 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:21.051896 | orchestrator |
2025-06-02 12:56:21.052852 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 12:56:21.053770 | orchestrator | Monday 02 June 2025 12:56:21 +0000 (0:00:00.483) 0:00:05.551 ***********
2025-06-02 12:56:22.069953 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:22.070397 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:22.071263 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:22.072422 | orchestrator |
2025-06-02 12:56:22.073603 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 12:56:22.075192 | orchestrator | Monday 02 June 2025 12:56:22 +0000 (0:00:13.193) 0:00:19.768 ***********
2025-06-02 12:56:35.267748 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:35.267878 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:35.267895 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:35.267907 | orchestrator |
2025-06-02 12:56:35.267919 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 12:56:35.267930 | orchestrator | Monday 02 June 2025 12:56:35 +0000 (0:00:00.104) 0:00:19.872 ***********
2025-06-02 12:56:35.364984 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:56:35.369062 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:56:35.369173 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:56:35.369189 | orchestrator |
2025-06-02 12:56:35.369203 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 12:56:35.369215 | orchestrator | Monday 02 June 2025 12:56:35 +0000 (0:00:06.944) 0:00:26.817 ***********
2025-06-02 12:56:42.311260 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:42.311688 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:42.312610 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:42.312703 | orchestrator |
2025-06-02 12:56:42.315266 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 12:56:42.315822 | orchestrator | Monday 02 June 2025 12:56:42 +0000 (0:00:00.422) 0:00:27.240 ***********
2025-06-02 12:56:42.733026 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:42.733278 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:42.734203 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:42.735096 | orchestrator |
2025-06-02 12:56:42.735699 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 12:56:42.736378 | orchestrator | Monday 02 June 2025 12:56:42 +0000 (0:00:03.419) 0:00:30.659 ***********
2025-06-02 12:56:46.156185 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 12:56:46.156303 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 12:56:46.157148 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 12:56:46.158395 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 12:56:46.159869 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 12:56:46.160638 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 12:56:46.161121 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 12:56:46.162161 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 12:56:46.162942 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 12:56:46.163495 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:56:46.163980 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:56:46.164643 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:56:46.165150 | orchestrator |
2025-06-02 12:56:46.165620 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 12:56:46.166052 | orchestrator | Monday 02 June 2025 12:56:46 +0000 (0:00:01.143) 0:00:31.803 ***********
2025-06-02 12:56:47.297158 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:47.299842 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:47.299882 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:47.300501 | orchestrator |
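Editor's note: the two facts plays above use Ansible's standard local-facts mechanism, staging .fact files under /etc/ansible/facts.d so they surface under ansible_local after the next fact gathering. A minimal sketch of the pattern, with file names taken from the loop items in the log (the actual source and destination paths in the testbed playbooks are assumptions):

    # Hedged sketch of the custom-facts pattern; paths are assumed.
    - name: Create custom facts directory
      become: true
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact files
      become: true
      ansible.builtin.copy:
        src: "{{ item }}.fact"
        dest: "/etc/ansible/facts.d/{{ item }}.fact"
        mode: "0755"
      loop:
        - testbed_ceph_devices
        - testbed_ceph_devices_all
        - testbed_ceph_osd_devices
        - testbed_ceph_osd_devices_all

Once the closing "Gathers facts about hosts" play below has run, these values are reachable as, for example, ansible_local.testbed_ceph_devices.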
2025-06-02 12:56:47.301138 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 12:56:47.301937 | orchestrator |
2025-06-02 12:56:47.302539 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 12:56:47.303330 | orchestrator | Monday 02 June 2025 12:56:47 +0000 (0:00:01.143) 0:00:31.803 ***********
2025-06-02 12:56:51.049871 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:51.050008 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:51.051287 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:51.052303 | orchestrator | ok: [testbed-manager]
2025-06-02 12:56:51.052694 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:51.053731 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:51.054383 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:51.055152 | orchestrator |
2025-06-02 12:56:51.056144 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:56:51.056162 | orchestrator | 2025-06-02 12:56:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:56:51.056169 | orchestrator | 2025-06-02 12:56:51 | INFO  | Please wait and do not abort execution.
2025-06-02 12:56:51.056776 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:56:51.057327 | orchestrator | testbed-node-0 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:56:51.058490 | orchestrator | testbed-node-1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:56:51.059007 | orchestrator | testbed-node-2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:56:51.059672 | orchestrator | testbed-node-3 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-02 12:56:51.060623 | orchestrator | testbed-node-4 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-02 12:56:51.061168 | orchestrator | testbed-node-5 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-06-02 12:56:51.061989 | orchestrator |
2025-06-02 12:56:51.062661 | orchestrator |
2025-06-02 12:56:51.063295 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:56:51.064059 | orchestrator | Monday 02 June 2025 12:56:51 +0000 (0:00:03.753) 0:00:35.557 ***********
2025-06-02 12:56:51.064435 | orchestrator | ===============================================================================
2025-06-02 12:56:51.065141 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.19s
2025-06-02 12:56:51.065517 | orchestrator | Install required packages (Debian) -------------------------------------- 6.95s
2025-06-02 12:56:51.065973 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.75s
2025-06-02 12:56:51.066469 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2025-06-02 12:56:51.066917 | orchestrator | Create custom facts directory ------------------------------------------- 1.48s
2025-06-02 12:56:51.067550 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-06-02 12:56:51.067997 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.14s
2025-06-02 12:56:51.068639 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2025-06-02 12:56:51.069041 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-06-02 12:56:51.069512 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-06-02 12:56:51.070002 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-02 12:56:51.070517 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-06-02 12:56:51.071536 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-06-02 12:56:51.071828 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-06-02 12:56:51.072686 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-06-02 12:56:51.072782 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-06-02 12:56:51.073278 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-06-02 12:56:51.073907 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-06-02 12:56:51.575374 | orchestrator | + osism apply bootstrap
2025-06-02 12:56:53.326498 | orchestrator | Registering Redlock._acquired_script
2025-06-02 12:56:53.326610 | orchestrator | Registering Redlock._extend_script
2025-06-02 12:56:53.326628 | orchestrator | Registering Redlock._release_script
2025-06-02 12:56:53.387525 | orchestrator | 2025-06-02 12:56:53 | INFO  | Task 7fbe6d4d-4cad-41f7-9532-cd68aac32dcc (bootstrap) was prepared for execution.
2025-06-02 12:56:53.387641 | orchestrator | 2025-06-02 12:56:53 | INFO  | It takes a moment until task 7fbe6d4d-4cad-41f7-9532-cd68aac32dcc (bootstrap) has been started and output is visible here.
2025-06-02 12:56:57.192935 | orchestrator |
2025-06-02 12:56:57.193040 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 12:56:57.193162 | orchestrator |
2025-06-02 12:56:57.193925 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 12:56:57.196166 | orchestrator | Monday 02 June 2025 12:56:57 +0000 (0:00:00.161) 0:00:00.161 ***********
2025-06-02 12:56:57.260423 | orchestrator | ok: [testbed-manager]
2025-06-02 12:56:57.291817 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:57.309532 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:57.335020 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:57.404404 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:57.404488 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:57.404502 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:57.405473 | orchestrator |
2025-06-02 12:56:57.405870 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 12:56:57.406589 | orchestrator |
2025-06-02 12:56:57.407452 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 12:56:57.408446 | orchestrator | Monday 02 June 2025 12:56:57 +0000 (0:00:00.212) 0:00:00.373 ***********
2025-06-02 12:57:01.165188 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:01.166163 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:01.167174 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:01.169292 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:01.173409 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:01.174512 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:01.175494 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:01.175795 | orchestrator |
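Editor's note: "Group hosts based on state bootstrap" is the usual dynamic-grouping pattern in Ansible, presumably built on ansible.builtin.group_by so that later plays can target only hosts that still need bootstrapping. A plausible reading, with the state variable name assumed rather than taken from the playbook source:

    # Hedged sketch; the variable name state_bootstrap is an assumption.
    - name: Group hosts based on state bootstrap
      ansible.builtin.group_by:
        key: "bootstrap_{{ state_bootstrap | default('todo') }}"
      changed_when: false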
2025-06-02 12:57:01.176431 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 12:57:01.177288 | orchestrator |
2025-06-02 12:57:01.178191 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 12:57:01.179231 | orchestrator | Monday 02 June 2025 12:57:01 +0000 (0:00:03.764) 0:00:04.138 ***********
2025-06-02 12:57:01.249860 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 12:57:01.297983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 12:57:01.298632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 12:57:01.298734 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 12:57:01.299058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 12:57:01.302002 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 12:57:01.348565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 12:57:01.349279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 12:57:01.350317 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 12:57:01.354572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-02 12:57:01.354679 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 12:57:01.357661 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 12:57:01.358305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 12:57:01.358858 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 12:57:01.359563 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-02 12:57:01.394329 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:01.397222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-02 12:57:01.397665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 12:57:01.398145 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-02 12:57:01.399161 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 12:57:01.400957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-02 12:57:01.683214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 12:57:01.683313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-02 12:57:01.683328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 12:57:01.683339 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 12:57:01.683408 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:01.684133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 12:57:01.684242 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 12:57:01.685031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 12:57:01.685340 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 12:57:01.686268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 12:57:01.686496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 12:57:01.686913 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:01.687125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 12:57:01.687905 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 12:57:01.688104 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 12:57:01.688950 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:01.689137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 12:57:01.689673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 12:57:01.689876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 12:57:01.690326 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-02 12:57:01.690812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 12:57:01.691246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 12:57:01.691768 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-02 12:57:01.692173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 12:57:01.693015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 12:57:01.693186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 12:57:01.693484 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-02 12:57:01.694154 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 12:57:01.694433 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:01.694721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 12:57:01.695247 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:01.696010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 12:57:01.696230 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 12:57:01.696984 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 12:57:01.697329 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:01.697613 | orchestrator |
2025-06-02 12:57:01.698221 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-02 12:57:01.699961 | orchestrator |
2025-06-02 12:57:01.699991 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-02 12:57:01.700003 | orchestrator | Monday 02 June 2025 12:57:01 +0000 (0:00:00.516) 0:00:04.655 ***********
2025-06-02 12:57:02.930322 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:02.931015 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:02.931795 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:02.932632 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:02.933651 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:02.934803 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:02.936700 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:02.936757 | orchestrator |
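Editor's note: the osism.commons.hostname steps map onto two small, well-known tasks. A hedged sketch follows; the exact Jinja expression used by the role is an assumption:

    # Hedged sketch; the hostname expression is assumed.
    - name: Set hostname
      become: true
      ansible.builtin.hostname:
        name: "{{ inventory_hostname_short }}"

    - name: Copy /etc/hostname
      become: true
      ansible.builtin.copy:
        content: "{{ inventory_hostname_short }}\n"
        dest: /etc/hostname
        mode: "0644"

Both tasks report ok on every host here because the images already carry the expected hostnames.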
2025-06-02 12:57:02.936773 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-02 12:57:02.937441 | orchestrator | Monday 02 June 2025 12:57:02 +0000 (0:00:01.246) 0:00:05.901 ***********
2025-06-02 12:57:04.152494 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:04.152599 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:04.152614 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:04.153699 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:04.154264 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:04.154854 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:04.156216 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:04.156250 | orchestrator |
2025-06-02 12:57:04.157344 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-02 12:57:04.157380 | orchestrator | Monday 02 June 2025 12:57:04 +0000 (0:00:01.217) 0:00:07.119 ***********
2025-06-02 12:57:04.429619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:04.430645 | orchestrator |
2025-06-02 12:57:04.431797 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-06-02 12:57:04.431931 | orchestrator | Monday 02 June 2025 12:57:04 +0000 (0:00:00.281) 0:00:07.400 ***********
2025-06-02 12:57:06.377956 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:06.382372 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:06.384017 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:06.384689 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:06.385696 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:06.386373 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:06.387346 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:06.389953 | orchestrator |
2025-06-02 12:57:06.390002 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-06-02 12:57:06.390105 | orchestrator | Monday 02 June 2025 12:57:06 +0000 (0:00:00.312) 0:00:09.345 ***********
2025-06-02 12:57:06.473506 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:06.685236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:06.685408 | orchestrator |
2025-06-02 12:57:06.690734 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-06-02 12:57:06.690778 | orchestrator | Monday 02 June 2025 12:57:06 +0000 (0:00:00.978) 0:00:09.657 ***********
2025-06-02 12:57:07.665579 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:07.666664 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:07.669396 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:07.670994 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:07.672172 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:07.677976 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:07.678250 | orchestrator |
2025-06-02 12:57:07.679528 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-06-02 12:57:07.679974 | orchestrator | Monday 02 June 2025 12:57:07 +0000 (0:00:00.554) 0:00:10.636 ***********
2025-06-02 12:57:07.753987 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:08.220248 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:08.220408 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:08.222374 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:08.222760 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:08.223656 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:08.224680 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:08.225570 | orchestrator |
2025-06-02 12:57:08.226273 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-06-02 12:57:08.227000 | orchestrator | Monday 02 June 2025 12:57:08 +0000 (0:00:00.407) 0:00:11.191 ***********
2025-06-02 12:57:08.318963 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:08.339173 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:08.366231 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:08.627304 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:08.628740 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:08.629918 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:08.630539 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:08.633380 | orchestrator |
2025-06-02 12:57:08.633413 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 12:57:08.633427 | orchestrator | Monday 02 June 2025 12:57:08 +0000 (0:00:00.248) 0:00:11.598 ***********
2025-06-02 12:57:08.723040 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:08.748037 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:08.776866 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:08.801889 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:08.874987 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:08.876203 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:08.878517 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:08.879155 | orchestrator |
2025-06-02 12:57:08.880517 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 12:57:08.881640 | orchestrator | Monday 02 June 2025 12:57:08 +0000 (0:00:00.304) 0:00:11.847 ***********
2025-06-02 12:57:09.180606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:09.182209 | orchestrator |
2025-06-02 12:57:09.183795 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 12:57:09.184795 | orchestrator | Monday 02 June 2025 12:57:09 +0000 (0:00:00.324) 0:00:12.151 ***********
2025-06-02 12:57:09.505645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:09.505880 | orchestrator |
2025-06-02 12:57:09.507885 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 12:57:09.508993 | orchestrator | Monday 02 June 2025 12:57:09 +0000 (0:00:00.198) 0:00:12.476 ***********
2025-06-02 12:57:10.670931 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:10.671041 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:10.671225 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:10.673218 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:10.674463 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:10.675335 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:10.675925 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:10.676946 | orchestrator |
2025-06-02 12:57:10.677860 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 12:57:10.679182 | orchestrator | Monday 02 June 2025 12:57:10 +0000 (0:00:01.164) 0:00:13.641 ***********
2025-06-02 12:57:10.745453 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:10.767761 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:10.793465 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:10.819911 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:10.866212 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:10.866545 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:10.867057 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:10.870109 | orchestrator |
2025-06-02 12:57:10.870434 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 12:57:10.871216 | orchestrator | Monday 02 June 2025 12:57:10 +0000 (0:00:00.198) 0:00:13.840 ***********
2025-06-02 12:57:11.389732 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:11.390373 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:11.391143 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:11.393454 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:11.394232 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:11.394636 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:11.396137 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:11.396664 | orchestrator |
2025-06-02 12:57:11.397785 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 12:57:11.398774 | orchestrator | Monday 02 June 2025 12:57:11 +0000 (0:00:00.521) 0:00:14.361 ***********
2025-06-02 12:57:11.510894 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:11.540128 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:11.568637 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:11.648029 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:11.649276 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:11.649415 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:11.650658 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:11.650796 | orchestrator |
2025-06-02 12:57:11.651269 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 12:57:11.652052 | orchestrator | Monday 02 June 2025 12:57:11 +0000 (0:00:00.259) 0:00:14.620 ***********
2025-06-02 12:57:12.205360 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:12.205859 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:12.206880 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:12.207971 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:12.208737 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:12.209619 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:12.209903 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:12.210903 | orchestrator |
2025-06-02 12:57:12.211556 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 12:57:12.212040 | orchestrator | Monday 02 June 2025 12:57:12 +0000 (0:00:00.557) 0:00:15.178 ***********
2025-06-02 12:57:13.275993 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:13.277283 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:13.278251 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:13.279229 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:13.280179 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:13.281023 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:13.283893 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:13.283927 | orchestrator |
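Editor's note: the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task names exactly what it does; on systemd-resolved systems /etc/resolv.conf is conventionally a symlink to the stub resolver file. A minimal sketch of such a task (the force flag is an assumption):

    # Hedged sketch of the stub-resolv.conf symlink task.
    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      become: true
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        force: true

This is idempotent, which matches the output above: ok on testbed-manager, where the link already exists, and changed on the freshly provisioned nodes.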
2025-06-02 12:57:13.283941 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 12:57:13.283954 | orchestrator | Monday 02 June 2025 12:57:13 +0000 (0:00:01.069) 0:00:16.247 ***********
2025-06-02 12:57:14.334584 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:14.334776 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:14.335168 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:14.335964 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:14.337093 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:14.338337 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:14.339350 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:14.340847 | orchestrator |
2025-06-02 12:57:14.342178 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 12:57:14.343102 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:01.056) 0:00:17.304 ***********
2025-06-02 12:57:14.653923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:14.655429 | orchestrator |
2025-06-02 12:57:14.656458 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 12:57:14.658507 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:00.321) 0:00:17.626 ***********
2025-06-02 12:57:14.714112 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:15.796187 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:15.796366 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:15.797529 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:15.798586 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:15.799206 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:15.799858 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:15.800424 | orchestrator |
2025-06-02 12:57:15.801099 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 12:57:15.801520 | orchestrator | Monday 02 June 2025 12:57:15 +0000 (0:00:01.141) 0:00:18.768 ***********
2025-06-02 12:57:15.869767 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:15.885766 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:15.908150 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:15.931183 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:15.991694 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:15.994757 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:15.994782 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:15.994795 | orchestrator |
2025-06-02 12:57:15.994808 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 12:57:15.994821 | orchestrator | Monday 02 June 2025 12:57:15 +0000 (0:00:00.197) 0:00:18.966 ***********
2025-06-02 12:57:16.065920 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:16.108807 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:16.133838 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:16.185392 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:16.186552 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:16.188663 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:16.188850 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:16.190380 | orchestrator |
2025-06-02 12:57:16.191623 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 12:57:16.192486 | orchestrator | Monday 02 June 2025 12:57:16 +0000 (0:00:00.192) 0:00:19.159 ***********
2025-06-02 12:57:16.248592 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:16.295216 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:16.318456 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:16.365743 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:16.366567 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:16.367860 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:16.368997 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:16.370183 | orchestrator |
2025-06-02 12:57:16.370961 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 12:57:16.371706 | orchestrator | Monday 02 June 2025 12:57:16 +0000 (0:00:00.180) 0:00:19.339 ***********
2025-06-02 12:57:16.615614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:16.615782 | orchestrator |
2025-06-02 12:57:16.616851 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 12:57:16.617929 | orchestrator | Monday 02 June 2025 12:57:16 +0000 (0:00:00.248) 0:00:19.588 ***********
2025-06-02 12:57:17.121493 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:17.121750 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:17.123345 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:17.124023 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:17.125298 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:17.125564 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:17.127015 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:17.127711 | orchestrator |
2025-06-02 12:57:17.128737 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 12:57:17.129510 | orchestrator | Monday 02 June 2025 12:57:17 +0000 (0:00:00.502) 0:00:20.091 ***********
2025-06-02 12:57:17.230853 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:17.268159 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:17.298773 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:17.367878 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:17.368108 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:17.368969 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:17.369722 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:17.369879 | orchestrator |
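Editor's note: the repository role targets the deb822 sources layout used by Ubuntu 24.04, which is why "Include tasks for Ubuntu < 24.04" is skipped everywhere and a separate task removes the legacy sources.list. A hedged sketch of the "Copy ubuntu.sources file" step; the mirror URI, suites, and keyring path below are illustrative, not taken from the role:

    # Hedged sketch; the deb822 content is illustrative only.
    - name: Copy ubuntu.sources file
      become: true
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu
          Suites: noble noble-updates noble-backports noble-security
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg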
orchestrator | ok: [testbed-manager] 2025-06-02 12:57:18.442554 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:18.443797 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:18.445279 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:18.445972 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:57:18.446829 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:57:18.447747 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:57:18.448314 | orchestrator | 2025-06-02 12:57:18.449312 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 12:57:18.449721 | orchestrator | Monday 02 June 2025 12:57:18 +0000 (0:00:01.070) 0:00:21.411 *********** 2025-06-02 12:57:19.024580 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:19.024995 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:19.026362 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:19.026992 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:19.027530 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:19.028737 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:19.029711 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:19.030550 | orchestrator | 2025-06-02 12:57:19.031582 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 12:57:19.032550 | orchestrator | Monday 02 June 2025 12:57:19 +0000 (0:00:00.583) 0:00:21.995 *********** 2025-06-02 12:57:20.185186 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:20.185817 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:20.186888 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:20.187632 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:20.188448 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:57:20.189198 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:57:20.189887 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:57:20.190811 | orchestrator | 2025-06-02 12:57:20.191533 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 12:57:20.192191 | orchestrator | Monday 02 June 2025 12:57:20 +0000 (0:00:01.160) 0:00:23.155 *********** 2025-06-02 12:57:33.826373 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:33.826494 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:33.826510 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:33.826522 | orchestrator | changed: [testbed-manager] 2025-06-02 12:57:33.827871 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:57:33.828806 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:57:33.829506 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:57:33.830746 | orchestrator | 2025-06-02 12:57:33.831337 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-02 12:57:33.832245 | orchestrator | Monday 02 June 2025 12:57:33 +0000 (0:00:13.638) 0:00:36.793 *********** 2025-06-02 12:57:33.914507 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:33.942874 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:33.967396 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:33.999695 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:34.056915 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:34.057220 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:34.058338 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:34.059109 | orchestrator | 2025-06-02 12:57:34.059649 | orchestrator | TASK 
2025-06-02 12:57:33.831337 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-02 12:57:33.832245 | orchestrator | Monday 02 June 2025 12:57:33 +0000 (0:00:13.638) 0:00:36.793 ***********
2025-06-02 12:57:33.914507 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:33.942874 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:33.967396 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:33.999695 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:34.056915 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:34.057220 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:34.058338 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:34.059109 | orchestrator |
2025-06-02 12:57:34.059649 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-02 12:57:34.060140 | orchestrator | Monday 02 June 2025 12:57:34 +0000 (0:00:00.234) 0:00:37.028 ***********
2025-06-02 12:57:34.172203 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:34.202548 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:34.232562 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:34.261162 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:34.326539 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:34.326924 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:34.328347 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:34.332693 | orchestrator |
2025-06-02 12:57:34.332790 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-02 12:57:34.332806 | orchestrator | Monday 02 June 2025 12:57:34 +0000 (0:00:00.270) 0:00:37.299 ***********
2025-06-02 12:57:34.410760 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:34.447630 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:34.470788 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:34.498456 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:34.567210 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:34.568302 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:34.572632 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:34.572691 | orchestrator |
2025-06-02 12:57:34.573011 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-02 12:57:34.573864 | orchestrator | Monday 02 June 2025 12:57:34 +0000 (0:00:00.240) 0:00:37.539 ***********
2025-06-02 12:57:34.894646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:34.895319 | orchestrator |
2025-06-02 12:57:34.896020 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-02 12:57:34.896596 | orchestrator | Monday 02 June 2025 12:57:34 +0000 (0:00:00.326) 0:00:37.866 ***********
2025-06-02 12:57:36.353516 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:36.353645 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:36.353948 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:36.355578 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:36.356932 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:36.357635 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:36.358427 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:36.359212 | orchestrator |
2025-06-02 12:57:36.360137 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-02 12:57:36.360770 | orchestrator | Monday 02 June 2025 12:57:36 +0000 (0:00:01.456) 0:00:39.322 ***********
2025-06-02 12:57:37.404676 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:37.405905 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:37.405934 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:37.406428 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:37.406865 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:37.407581 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:37.408219 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:37.408761 | orchestrator |
2025-06-02 12:57:37.409443 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-02 12:57:37.409975 | orchestrator | Monday 02 June 2025 12:57:37 +0000 (0:00:01.053) 0:00:40.375 ***********
2025-06-02 12:57:38.232021 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:38.232245 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:38.232278 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:38.232463 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:38.232637 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:38.233374 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:38.233954 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:38.234490 | orchestrator |
2025-06-02 12:57:38.234593 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-02 12:57:38.235668 | orchestrator | Monday 02 June 2025 12:57:38 +0000 (0:00:00.827) 0:00:41.203 ***********
2025-06-02 12:57:38.509260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:38.510511 | orchestrator |
2025-06-02 12:57:38.511096 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-02 12:57:38.511957 | orchestrator | Monday 02 June 2025 12:57:38 +0000 (0:00:00.278) 0:00:41.482 ***********
2025-06-02 12:57:39.483577 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:39.484498 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:39.485629 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:39.488561 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:39.489178 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:39.489467 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:39.489970 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:39.490919 | orchestrator |
2025-06-02 12:57:39.491224 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-02 12:57:39.492400 | orchestrator | Monday 02 June 2025 12:57:39 +0000 (0:00:00.972) 0:00:42.454 ***********
2025-06-02 12:57:39.563862 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:57:39.593221 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:39.617627 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:39.653433 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:39.799389 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:39.800326 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:39.801099 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:39.802317 | orchestrator |
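The fluentd step above drops a forwarding rule so that rsyslog relays all syslog traffic to a fluentd daemon on the same host; no external log server is configured, so the "additional log server" branch is skipped. A sketch of what such a rule can look like, assuming the common fluentd syslog input on 127.0.0.1:5140 (the port and file name here are illustrative, not taken from the role):

    - name: Forward syslog message to local fluentd daemon
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/10-fluentd.conf
        mode: "0644"
        content: |
          # Relay everything to the local fluentd syslog input (illustrative target/port)
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")

A real role would notify a handler to restart rsyslog after this file changes.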
2025-06-02 12:57:39.803291 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-02 12:57:39.804513 | orchestrator | Monday 02 June 2025 12:57:39 +0000 (0:00:00.317) 0:00:42.772 ***********
2025-06-02 12:57:51.375849 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:51.375965 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:51.376729 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:51.377906 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:51.378790 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:51.379622 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:51.380427 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:51.381192 | orchestrator |
2025-06-02 12:57:51.382076 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-06-02 12:57:51.382441 | orchestrator | Monday 02 June 2025 12:57:51 +0000 (0:00:11.572) 0:00:54.344 ***********
2025-06-02 12:57:52.662392 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:52.662871 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:52.664380 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:52.665822 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:52.666387 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:52.668340 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:52.669055 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:52.670970 | orchestrator |
2025-06-02 12:57:52.672175 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-06-02 12:57:52.678659 | orchestrator | Monday 02 June 2025 12:57:52 +0000 (0:00:01.288) 0:00:55.633 ***********
2025-06-02 12:57:53.557599 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:53.557775 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:53.559198 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:53.560360 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:53.560618 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:53.561316 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:53.561999 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:53.562825 | orchestrator |
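systohc writes the system time back to the hardware clock (hwclock --systohc, which is why util-linux-extra is installed first), and the configfs role only ensures the sys-kernel-config mount unit is active so configfs is available. A rough equivalent, assuming the command and unit names below:

    - name: Sync hardware clock
      ansible.builtin.command: hwclock --systohc
      changed_when: false  # reported as ok in the log above, not changed

    - name: Start sys-kernel-config mount
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started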
2025-06-02 12:57:53.565199 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-06-02 12:57:53.565971 | orchestrator | Monday 02 June 2025 12:57:53 +0000 (0:00:00.894) 0:00:56.527 ***********
2025-06-02 12:57:53.637049 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:53.670909 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:53.696969 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:53.728486 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:53.782913 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:53.783630 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:53.784637 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:53.785239 | orchestrator |
2025-06-02 12:57:53.785746 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-02 12:57:53.786492 | orchestrator | Monday 02 June 2025 12:57:53 +0000 (0:00:00.228) 0:00:56.756 ***********
2025-06-02 12:57:53.867082 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:53.892554 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:53.922909 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:53.947512 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:54.022456 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:54.022643 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:54.023691 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:54.027858 | orchestrator |
2025-06-02 12:57:54.027910 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-02 12:57:54.027925 | orchestrator | Monday 02 June 2025 12:57:54 +0000 (0:00:00.237) 0:00:56.994 ***********
2025-06-02 12:57:54.362417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 12:57:54.362518 | orchestrator |
2025-06-02 12:57:54.363096 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-02 12:57:54.363991 | orchestrator | Monday 02 June 2025 12:57:54 +0000 (0:00:00.339) 0:00:57.333 ***********
2025-06-02 12:57:56.003435 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:56.003813 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:56.004651 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:56.006604 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:56.007221 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:56.009240 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:56.010754 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:56.011750 | orchestrator |
2025-06-02 12:57:56.012082 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-02 12:57:56.013372 | orchestrator | Monday 02 June 2025 12:57:55 +0000 (0:00:01.639) 0:00:58.972 ***********
2025-06-02 12:57:56.576977 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:56.577286 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:56.577880 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:56.578752 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:56.579569 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:56.580387 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:56.580734 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:56.581168 | orchestrator |
2025-06-02 12:57:56.581814 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-02 12:57:56.582513 | orchestrator | Monday 02 June 2025 12:57:56 +0000 (0:00:00.575) 0:00:59.548 ***********
2025-06-02 12:57:56.665648 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:56.701441 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:56.738180 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:56.767044 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:56.827970 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:56.829364 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:56.830632 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:56.831539 | orchestrator |
2025-06-02 12:57:56.832416 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-02 12:57:56.833699 | orchestrator | Monday 02 June 2025 12:57:56 +0000 (0:00:00.252) 0:00:59.801 ***********
2025-06-02 12:57:57.987106 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:57.988608 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:57.989606 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:57.990911 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:57.991832 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:57.992379 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:57.993425 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:57.994320 | orchestrator |
2025-06-02 12:57:57.994667 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-02 12:57:57.995306 | orchestrator | Monday 02 June 2025 12:57:57 +0000 (0:00:01.157) 0:01:00.958 ***********
2025-06-02 12:57:59.670921 | orchestrator | changed: [testbed-manager]
2025-06-02 12:57:59.671540 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:59.671882 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:59.672361 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:59.673188 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:59.673829 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:59.674507 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:59.674912 | orchestrator |
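The "Set needrestart mode" task above keeps the following apt runs non-interactive: in automatic mode, needrestart restarts affected services itself instead of prompting on the console. The usual way to pin that, with an illustrative conf.d file name:

    - name: Set needrestart mode
      ansible.builtin.copy:
        dest: /etc/needrestart/conf.d/99-autorestart.conf  # hypothetical file name
        mode: "0644"
        content: |
          # 'a' = automatically restart services after library upgrades
          $nrconf{restart} = 'a';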
2025-06-02 12:57:59.675650 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-02 12:57:59.675792 | orchestrator | Monday 02 June 2025 12:57:59 +0000 (0:00:01.682) 0:01:02.640 ***********
2025-06-02 12:58:02.014083 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:58:02.014339 | orchestrator | ok: [testbed-manager]
2025-06-02 12:58:02.015333 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:58:02.015514 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:58:02.018552 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:58:02.019435 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:58:02.020788 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:58:02.021506 | orchestrator |
2025-06-02 12:58:02.022097 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-02 12:58:02.022651 | orchestrator | Monday 02 June 2025 12:58:02 +0000 (0:00:02.342) 0:01:04.982 ***********
2025-06-02 12:58:39.782789 | orchestrator | ok: [testbed-manager]
2025-06-02 12:58:39.782912 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:58:39.782935 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:58:39.783013 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:58:39.783033 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:58:39.783052 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:58:39.783172 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:58:39.784691 | orchestrator |
2025-06-02 12:58:39.784863 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-02 12:58:39.785454 | orchestrator | Monday 02 June 2025 12:58:39 +0000 (0:00:37.766) 0:01:42.749 ***********
2025-06-02 12:59:53.305192 | orchestrator | changed: [testbed-manager]
2025-06-02 12:59:53.305328 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:59:53.305345 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:59:53.305422 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:59:53.306205 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:59:53.306594 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:59:53.307244 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:59:53.307719 | orchestrator |
2025-06-02 12:59:53.308479 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-02 12:59:53.308733 | orchestrator | Monday 02 June 2025 12:59:53 +0000 (0:01:13.523) 0:02:56.273 ***********
2025-06-02 12:59:55.076351 | orchestrator | ok: [testbed-manager]
2025-06-02 12:59:55.076467 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:59:55.076543 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:59:55.078379 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:59:55.078415 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:59:55.078586 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:59:55.078946 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:59:55.079257 | orchestrator |
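The packages role follows a download-then-act pattern: fetch the upgrades, apply them, fetch the required package set, install it, then clean up. With ansible.builtin.apt, the same flow looks roughly like this (the variable names and cache time are placeholders, not the role's exact defaults):

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true
        cache_valid_time: 3600

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"  # assumed to come from the role's defaults
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true

Pre-downloading before installing keeps the long network phase (37 and 73 seconds above) separate from the short dpkg phase, which makes failures easier to attribute.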
2025-06-02 12:59:55.079461 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-02 12:59:55.079939 | orchestrator | Monday 02 June 2025 12:59:55 +0000 (0:00:01.774) 0:02:58.047 ***********
2025-06-02 13:00:06.958554 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:06.958677 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:06.959397 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:06.961972 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:06.964001 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:06.964993 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:06.965359 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:06.966276 | orchestrator |
2025-06-02 13:00:06.967079 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-02 13:00:06.967792 | orchestrator | Monday 02 June 2025 13:00:06 +0000 (0:00:11.876) 0:03:09.924 ***********
2025-06-02 13:00:07.341141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-02 13:00:07.342075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-02 13:00:07.342767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-02 13:00:07.345588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-02 13:00:07.350147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-02 13:00:07.350765 | orchestrator |
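The include loop shows how the sysctl defaults are organized: a map from service group to a list of name/value pairs, with each group applied only on the hosts that run that service. Reconstructed from the items printed above (the variable name is illustrative; the rabbitmq list is abbreviated):

    sysctl_defaults:
      elasticsearch:
        - name: vm.max_map_count
          value: 262144
      rabbitmq:
        - name: net.ipv4.tcp_keepalive_time
          value: 6
        - name: net.core.somaxconn
          value: 4096
        # ... remaining keepalive, buffer, and backlog tunables as listed above
      generic:
        - name: vm.swappiness
          value: 1
      compute:
        - name: net.netfilter.nf_conntrack_max
          value: 1048576
      k3s_node:
        - name: fs.inotify.max_user_instances
          value: 1024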
2025-06-02 13:00:07.351176 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-02 13:00:07.352137 | orchestrator | Monday 02 June 2025 13:00:07 +0000 (0:00:00.389) 0:03:10.313 ***********
2025-06-02 13:00:07.400729 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:07.431501 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:07.431594 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:07.466469 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:00:07.466619 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:07.490835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:07.490980 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:00:07.518121 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:00:08.019637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:08.019807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:08.020504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 13:00:08.021638 | orchestrator |
2025-06-02 13:00:08.022364 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-02 13:00:08.023051 | orchestrator | Monday 02 June 2025 13:00:08 +0000 (0:00:00.678) 0:03:10.991 ***********
2025-06-02 13:00:08.095905 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:08.096008 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:08.096436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:08.096930 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:08.097341 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:08.097624 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:08.098008 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:08.098707 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:08.098992 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:08.099448 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:08.100185 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:08.100378 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:08.100888 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:08.135357 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:08.136054 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:08.136835 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:08.137271 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:08.139489 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:08.139514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:08.139525 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:08.139877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:08.141110 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:08.141192 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:08.169756 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:00:08.170766 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:08.171007 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:08.171459 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:08.171748 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:08.172252 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:08.172605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:08.172945 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:08.173218 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:08.176695 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:08.176738 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:08.176748 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:08.203917 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:00:08.204791 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:08.206421 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:08.207413 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:08.208540 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:08.209731 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:08.210687 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:08.212484 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:08.233818 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:00:12.649263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:12.649402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:12.649660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-02 13:00:12.649940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:12.650384 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:12.651705 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:00:12.651826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:12.653059 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:12.654111 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:00:12.654159 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:12.654172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:12.654183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:12.654809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:12.655183 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:12.655598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:12.656032 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:00:12.656808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:12.657210 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:12.657427 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:00:12.658119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:12.658240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:12.658923 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:00:12.659161 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:12.659950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:12.660775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:00:12.660986 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:12.661012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:12.661027 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:00:12.661946 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:00:12.662577 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:00:12.662860 | orchestrator |
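Each per-group task then loops over its list, and hosts outside the group skip every item: that is why the manager and nodes 3-5 skip the rabbitmq tunables above while the control nodes 0-2 apply them. A sketch of the apply step with ansible.posix.sysctl (the when-condition is an assumption about how group membership is tested, and sysctl_defaults refers to the illustrative structure shown earlier):

    - name: Set sysctl parameters on rabbitmq
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true   # also apply the value to the running kernel
        state: present
      loop: "{{ sysctl_defaults['rabbitmq'] }}"
      when: "'rabbitmq' in group_names"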
2025-06-02 13:00:12.663758 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-02 13:00:12.663783 | orchestrator | Monday 02 June 2025 13:00:12 +0000 (0:00:04.628) 0:03:15.620 ***********
2025-06-02 13:00:14.157312 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.157484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.159901 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.160767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.162008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.162155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.163340 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:00:14.163457 | orchestrator |
2025-06-02 13:00:14.164748 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-02 13:00:14.164927 | orchestrator | Monday 02 June 2025 13:00:14 +0000 (0:00:01.507) 0:03:17.128 ***********
2025-06-02 13:00:14.227975 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.250091 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:14.337684 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.338106 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.667574 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:00:14.668064 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:00:14.669140 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.670648 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:00:14.671435 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.671886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.672581 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:00:14.674126 | orchestrator |
2025-06-02 13:00:14.675729 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 13:00:14.676490 | orchestrator | Monday 02 June 2025 13:00:14 +0000 (0:00:00.511) 0:03:17.639 ***********
2025-06-02 13:00:14.735785 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:14.767566 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:14.836619 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.248669 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:00:15.249803 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.251277 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:00:15.252392 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.254348 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:00:15.254409 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.255578 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.256043 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:00:15.256862 | orchestrator |
2025-06-02 13:00:15.258461 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 13:00:15.258517 | orchestrator | Monday 02 June 2025 13:00:15 +0000 (0:00:00.582) 0:03:18.222 ***********
2025-06-02 13:00:15.344462 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:15.371514 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:00:15.395651 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:00:15.424592 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:00:15.558661 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:00:15.559290 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:00:15.561415 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:00:15.561447 | orchestrator |
2025-06-02 13:00:15.562606 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 13:00:15.563440 | orchestrator | Monday 02 June 2025 13:00:15 +0000 (0:00:00.308) 0:03:18.530 ***********
2025-06-02 13:00:21.260149 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:21.261223 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:21.263402 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:21.264419 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:21.265539 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:21.268059 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:21.269340 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:21.269997 | orchestrator |
2025-06-02 13:00:21.271436 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 13:00:21.271526 | orchestrator | Monday 02 June 2025 13:00:21 +0000 (0:00:05.701) 0:03:24.231 ***********
2025-06-02 13:00:21.339443 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 13:00:21.339907 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 13:00:21.376300 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:21.417784 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:00:21.418906 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 13:00:21.461224 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 13:00:21.461902 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:00:21.461999 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-02 13:00:21.495009 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:00:21.568489 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:00:21.568589 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-02 13:00:21.568603 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:00:21.568696 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-02 13:00:21.569220 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:00:21.573583 | orchestrator |
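"Populate service facts" collects the state of every unit into ansible_facts.services, and the check task then only acts on services that actually exist on the host, which is why the nscd item is skipped on every node here. A sketch of the pattern (not the role's exact logic; the service list and condition are illustrative):

    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Check services
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: ["nscd"]
      when: (item ~ '.service') in ansible_facts.services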
2025-06-02 13:00:21.573614 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 13:00:21.573628 | orchestrator | Monday 02 June 2025 13:00:21 +0000 (0:00:00.309) 0:03:24.541 ***********
2025-06-02 13:00:22.592510 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 13:00:22.593013 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 13:00:22.594977 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 13:00:22.595080 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 13:00:22.595192 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 13:00:22.595859 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 13:00:22.596359 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 13:00:22.596969 | orchestrator |
2025-06-02 13:00:22.598396 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 13:00:22.598423 | orchestrator | Monday 02 June 2025 13:00:22 +0000 (0:00:01.022) 0:03:25.563 ***********
2025-06-02 13:00:23.081173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:00:23.081306 | orchestrator |
2025-06-02 13:00:23.082282 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 13:00:23.085656 | orchestrator | Monday 02 June 2025 13:00:23 +0000 (0:00:00.488) 0:03:26.052 ***********
2025-06-02 13:00:24.395643 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:24.395764 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:24.397581 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:24.398321 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:24.399522 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:24.399790 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:24.400721 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:24.401554 | orchestrator |
2025-06-02 13:00:24.402119 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 13:00:24.402801 | orchestrator | Monday 02 June 2025 13:00:24 +0000 (0:00:01.313) 0:03:27.366 ***********
2025-06-02 13:00:25.033904 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:25.034012 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:25.034087 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:25.034172 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:25.034226 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:25.035045 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:25.035920 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:25.036432 | orchestrator |
2025-06-02 13:00:25.036953 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 13:00:25.037465 | orchestrator | Monday 02 June 2025 13:00:25 +0000 (0:00:00.639) 0:03:28.005 ***********
2025-06-02 13:00:25.668491 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:25.669799 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:25.670541 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:25.671802 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:25.672338 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:25.673367 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:25.674739 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:25.675598 | orchestrator |
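Disabling Ubuntu's dynamic motd-news is normally a one-line toggle in /etc/default/motd-news, which is why the role checks for that file first. A sketch of the likely edit (the timer task is an assumption; the role may handle it differently):

    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: ENABLED=0

    - name: Stop and disable the motd-news timer  # assumed companion step
      ansible.builtin.systemd:
        name: motd-news.timer
        state: stopped
        enabled: false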
2025-06-02 13:00:25.676577 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 13:00:25.677487 | orchestrator | Monday 02 June 2025 13:00:25 +0000 (0:00:00.635) 0:03:28.640 ***********
2025-06-02 13:00:26.284171 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:26.285624 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:26.285940 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:26.287790 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:26.288947 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:26.289695 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:26.290539 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:26.291055 | orchestrator |
2025-06-02 13:00:26.291607 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 13:00:26.292179 | orchestrator | Monday 02 June 2025 13:00:26 +0000 (0:00:00.612) 0:03:29.253 ***********
2025-06-02 13:00:27.279684 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867862.6323326, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.282712 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867916.889431, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.282775 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867924.0435672, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.282798 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867915.176523, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.283963 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867913.3990238, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.284825 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867916.3346648, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.287471 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867924.8321216, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.287538 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867884.795689, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.288613 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867814.721762, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.290315 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867813.8807704, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.292262 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867823.809816, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.293296 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867814.1449397, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.294614 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867817.3156717, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.295562 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867821.1111503, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:00:27.296264 | orchestrator |
2025-06-02 13:00:27.297315 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-02 13:00:27.298298 | orchestrator | Monday 02 June 2025 13:00:27 +0000 (0:00:00.997) 0:03:30.251 ***********
2025-06-02 13:00:28.402326 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:28.405757 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:28.406539 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:28.407001 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:28.407886 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:28.408575 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:28.409298 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:28.409763 | orchestrator |
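The verbose dicts above are just the stat entries the find task returned for /etc/pam.d/sshd and /etc/pam.d/login; a follow-up edit strips the pam_motd.so lines from each file so neither login nor sshd regenerates the dynamic motd, and the role then installs its own static motd. Roughly (the register name is illustrative):

    - name: Get all configuration files in /etc/pam.d
      ansible.builtin.find:
        paths: /etc/pam.d
        contains: 'pam_motd\.so'
      register: pam_files

    - name: Remove pam_motd.so rule
      ansible.builtin.lineinfile:
        path: "{{ item.path }}"
        regexp: 'pam_motd\.so'
        state: absent
      loop: "{{ pam_files.files }}"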
2025-06-02 13:00:28.410517 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-02 13:00:28.411493 | orchestrator | Monday 02 June 2025 13:00:28 +0000 (0:00:01.122) 0:03:31.374 ***********
2025-06-02 13:00:29.542480 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:29.542609 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:29.543490 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:29.544530 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:29.545496 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:29.546370 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:29.547310 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:29.550798 | orchestrator |
2025-06-02 13:00:29.550899 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-02 13:00:29.550916 | orchestrator | Monday 02 June 2025 13:00:29 +0000 (0:00:01.137) 0:03:32.511 ***********
2025-06-02 13:00:30.690912 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:30.691331 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:30.694687 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:30.695192 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:30.696538 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:30.697415 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:30.698369 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:30.699242 | orchestrator |
2025-06-02 13:00:30.699888 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-02 13:00:30.700156 | orchestrator | Monday 02 June 2025 13:00:30 +0000 (0:00:01.150) 0:03:33.662 ***********
2025-06-02 13:00:30.797157 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:00:30.850094 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:00:30.881822 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:00:30.926102 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:00:31.003238 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:00:31.003392 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:00:31.004707 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:00:31.006131 | orchestrator |
2025-06-02 13:00:31.006985 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-02 13:00:31.007932 | orchestrator | Monday 02 June 2025 13:00:30 +0000 (0:00:00.305) 0:03:33.968 ***********
2025-06-02 13:00:31.729934 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:31.730083 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:31.730102 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:31.731557 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:31.732190 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:31.733446 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:31.734194 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:31.734808 | orchestrator |
2025-06-02 13:00:31.735785 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-02 13:00:31.736736 | orchestrator | Monday 02 June 2025 13:00:31 +0000 (0:00:00.730) 0:03:34.699 ***********
2025-06-02 13:00:32.100106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:00:32.100489 | orchestrator |
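With the static motd, issue, and issue.net files in place, sshd itself must not print the motd a second time, hence the mutually exclusive task pair above; on these hosts only the "not print" branch runs. The edit is a classic lineinfile toggle (a sketch; a real role would notify an sshd restart handler):

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: PrintMotd no
        validate: /usr/sbin/sshd -t -f %s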
2025-06-02 13:00:32.102182 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-02 13:00:32.103601 | orchestrator | Monday 02 June 2025 13:00:32 +0000 (0:00:00.372) 0:03:35.072 ***********
2025-06-02 13:00:39.972927 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:39.973114 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:39.973659 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:39.975742 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:39.975925 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:39.977419 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:39.979413 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:39.980570 | orchestrator |
2025-06-02 13:00:39.980806 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-02 13:00:39.981668 | orchestrator | Monday 02 June 2025 13:00:39 +0000 (0:00:07.870) 0:03:42.942 ***********
2025-06-02 13:00:41.114542 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:41.116948 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:41.118402 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:41.119005 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:41.120350 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:41.120982 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:41.121658 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:41.122459 | orchestrator |
2025-06-02 13:00:41.123000 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-02 13:00:41.123762 | orchestrator | Monday 02 June 2025 13:00:41 +0000 (0:00:01.144) 0:03:44.087 ***********
2025-06-02 13:00:42.091021 | orchestrator | ok: [testbed-manager]
2025-06-02 13:00:42.091381 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:00:42.092555 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:00:42.093621 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:00:42.094629 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:00:42.096064 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:00:42.096998 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:00:42.098668 | orchestrator |
2025-06-02 13:00:42.098792 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-02 13:00:42.099905 | orchestrator | Monday 02 June 2025 13:00:42 +0000 (0:00:00.974) 0:03:45.061 ***********
2025-06-02 13:00:42.562707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:00:42.563192 | orchestrator |
2025-06-02 13:00:42.564667 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-02 13:00:42.571582 | orchestrator | Monday 02 June 2025 13:00:42 +0000 (0:00:00.474) 0:03:45.535 ***********
2025-06-02 13:00:50.793323 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:00:50.794140 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:00:50.795349 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:00:50.796608 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:00:50.798476 | orchestrator | changed: [testbed-manager]
2025-06-02 13:00:50.798713 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:00:50.799248 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:00:50.800017 | orchestrator |
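The rng role swaps haveged for a dedicated rng daemon and the smartd role then sets up disk health monitoring: install smartmontools, create a log directory, ship a configuration, and (re)start the service. A minimal sketch of the smartd half, with an illustrative DEVICESCAN directive rather than the role's actual template:

    - name: Copy smartmontools configuration file
      ansible.builtin.copy:
        dest: /etc/smartd.conf
        mode: "0644"
        content: |
          # Monitor all detected devices with default health checks (illustrative)
          DEVICESCAN -a

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: restarted
        enabled: true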
13:00:50.802168 | orchestrator | Monday 02 June 2025 13:00:50 +0000 (0:00:08.229) 0:03:53.765 *********** 2025-06-02 13:00:51.402185 | orchestrator | changed: [testbed-manager] 2025-06-02 13:00:51.405992 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:00:51.406078 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:00:51.406102 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:00:51.406118 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:00:51.407018 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:00:51.407682 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:00:51.408426 | orchestrator | 2025-06-02 13:00:51.409197 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-02 13:00:51.409859 | orchestrator | Monday 02 June 2025 13:00:51 +0000 (0:00:00.609) 0:03:54.374 *********** 2025-06-02 13:00:52.490535 | orchestrator | changed: [testbed-manager] 2025-06-02 13:00:52.491535 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:00:52.491793 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:00:52.492164 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:00:52.498488 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:00:52.498531 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:00:52.498573 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:00:52.498585 | orchestrator | 2025-06-02 13:00:52.499287 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-02 13:00:52.500047 | orchestrator | Monday 02 June 2025 13:00:52 +0000 (0:00:01.088) 0:03:55.462 *********** 2025-06-02 13:00:53.548260 | orchestrator | changed: [testbed-manager] 2025-06-02 13:00:53.548392 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:00:53.549129 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:00:53.550391 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:00:53.551325 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:00:53.552037 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:00:53.552772 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:00:53.553362 | orchestrator | 2025-06-02 13:00:53.554231 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-02 13:00:53.554907 | orchestrator | Monday 02 June 2025 13:00:53 +0000 (0:00:01.056) 0:03:56.519 *********** 2025-06-02 13:00:53.655461 | orchestrator | ok: [testbed-manager] 2025-06-02 13:00:53.692165 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:00:53.729099 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:00:53.770134 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:00:53.849447 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:00:53.850213 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:00:53.851426 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:00:53.852371 | orchestrator | 2025-06-02 13:00:53.853324 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-02 13:00:53.854003 | orchestrator | Monday 02 June 2025 13:00:53 +0000 (0:00:00.304) 0:03:56.823 *********** 2025-06-02 13:00:53.972713 | orchestrator | ok: [testbed-manager] 2025-06-02 13:00:54.012893 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:00:54.051122 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:00:54.091984 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:00:54.167234 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:00:54.168234 | 
orchestrator | ok: [testbed-node-1] 2025-06-02 13:00:54.169998 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:00:54.170382 | orchestrator | 2025-06-02 13:00:54.171484 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-02 13:00:54.172074 | orchestrator | Monday 02 June 2025 13:00:54 +0000 (0:00:00.315) 0:03:57.139 *********** 2025-06-02 13:00:54.273408 | orchestrator | ok: [testbed-manager] 2025-06-02 13:00:54.314142 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:00:54.349500 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:00:54.384654 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:00:54.478879 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:00:54.480014 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:00:54.481978 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:00:54.483514 | orchestrator | 2025-06-02 13:00:54.485212 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-02 13:00:54.486550 | orchestrator | Monday 02 June 2025 13:00:54 +0000 (0:00:00.310) 0:03:57.450 *********** 2025-06-02 13:00:59.984259 | orchestrator | ok: [testbed-manager] 2025-06-02 13:00:59.984375 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:00:59.985317 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:00:59.985693 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:00:59.986487 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:00:59.987362 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:00:59.987987 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:00:59.988748 | orchestrator | 2025-06-02 13:00:59.989235 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-02 13:00:59.989697 | orchestrator | Monday 02 June 2025 13:00:59 +0000 (0:00:05.505) 0:04:02.955 *********** 2025-06-02 13:01:00.388450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:01:00.388627 | orchestrator | 2025-06-02 13:01:00.389306 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-02 13:01:00.390014 | orchestrator | Monday 02 June 2025 13:01:00 +0000 (0:00:00.405) 0:04:03.360 *********** 2025-06-02 13:01:00.483971 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-02 13:01:00.484074 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-02 13:01:00.484089 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-02 13:01:00.484165 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-02 13:01:00.527661 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:01:00.527756 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-02 13:01:00.527771 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-02 13:01:00.589943 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:01:00.590630 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-02 13:01:00.592996 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-02 13:01:00.621167 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:01:00.621739 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-02 
13:01:00.657882 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 13:01:00.658355 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:00.736615 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:00.737383 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 13:01:00.738353 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 13:01:00.739149 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:00.739775 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 13:01:00.742086 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 13:01:00.743974 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:00.746330 | orchestrator |
2025-06-02 13:01:00.746911 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 13:01:00.747543 | orchestrator | Monday 02 June 2025 13:01:00 +0000 (0:00:00.349) 0:04:03.709 ***********
2025-06-02 13:01:01.125304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:01.125472 | orchestrator |
2025-06-02 13:01:01.126993 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-02 13:01:01.128595 | orchestrator | Monday 02 June 2025 13:01:01 +0000 (0:00:00.386) 0:04:04.096 ***********
2025-06-02 13:01:01.203279 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-02 13:01:01.246072 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:01.246339 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-02 13:01:01.304793 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-02 13:01:01.304996 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:01:01.305056 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-02 13:01:01.360305 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:01:01.360704 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-02 13:01:01.414292 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:01.415005 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-02 13:01:01.500085 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:01.501419 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:01.502557 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-02 13:01:01.505769 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:01.505847 | orchestrator |
2025-06-02 13:01:01.505863 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-02 13:01:01.505955 | orchestrator | Monday 02 June 2025 13:01:01 +0000 (0:00:00.377) 0:04:04.473 ***********
2025-06-02 13:01:02.078230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:02.080164 | orchestrator |
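The packages cleanup that follows is the longest task in this stretch (0:00:33.210), since it purges packages across all seven hosts. A minimal sketch of what such cleanup tasks usually look like (cleanup_packages_distribution is the variable set to a default earlier in this role; the exact task layout in osism.commons.cleanup may differ):

  - name: Cleanup installed packages
    ansible.builtin.apt:
      name: "{{ cleanup_packages_distribution }}"
      state: absent
      purge: true

  - name: Remove useless packages from the cache
    ansible.builtin.apt:
      autoclean: true

  - name: Remove dependencies that are no longer required
    ansible.builtin.apt:
      autoremove: true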
2025-06-02 13:01:02.083727 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 13:01:02.083783 | orchestrator | Monday 02 June 2025 13:01:02 +0000 (0:00:00.576) 0:04:05.049 ***********
2025-06-02 13:01:35.296110 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:35.296272 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:35.296349 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:35.296368 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:35.296384 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:35.296400 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:35.298292 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:35.300279 | orchestrator |
2025-06-02 13:01:35.301043 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 13:01:35.305077 | orchestrator | Monday 02 June 2025 13:01:35 +0000 (0:00:33.210) 0:04:38.260 ***********
2025-06-02 13:01:42.839768 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:42.841068 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:42.844359 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:42.844402 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:42.844770 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:42.845957 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:42.847083 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:42.848212 | orchestrator |
2025-06-02 13:01:42.848282 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 13:01:42.848826 | orchestrator | Monday 02 June 2025 13:01:42 +0000 (0:00:07.550) 0:04:45.811 ***********
2025-06-02 13:01:50.195112 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:50.195258 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:50.195523 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:50.198106 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:50.198468 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:50.200294 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:50.200927 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:50.201809 | orchestrator |
2025-06-02 13:01:50.202326 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 13:01:50.202853 | orchestrator | Monday 02 June 2025 13:01:50 +0000 (0:00:07.356) 0:04:53.167 ***********
2025-06-02 13:01:51.838316 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:51.839434 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:51.839578 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:51.840862 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:51.841951 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:51.843293 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:51.844094 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:51.845240 | orchestrator |
2025-06-02 13:01:51.846144 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 13:01:51.846855 | orchestrator | Monday 02 June 2025 13:01:51 +0000 (0:00:01.642) 0:04:54.810 ***********
2025-06-02 13:01:57.302131 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:57.302904 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:57.304693 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:57.304972 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:57.308063 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:57.310065 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:57.310620 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:57.312047 | orchestrator |
2025-06-02 13:01:57.312692 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-02 13:01:57.313625 | orchestrator | Monday 02 June 2025 13:01:57 +0000 (0:00:05.463) 0:05:00.273 ***********
2025-06-02 13:01:57.735297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:57.735815 | orchestrator |
2025-06-02 13:01:57.736721 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-02 13:01:57.737704 | orchestrator | Monday 02 June 2025 13:01:57 +0000 (0:00:00.434) 0:05:00.708 ***********
2025-06-02 13:01:58.459824 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:58.463405 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:58.464355 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:58.464703 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:58.464793 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:58.465494 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:58.466400 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:58.469072 | orchestrator |
2025-06-02 13:01:58.472759 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-02 13:01:58.473162 | orchestrator | Monday 02 June 2025 13:01:58 +0000 (0:00:00.722) 0:05:01.430 ***********
2025-06-02 13:02:00.005481 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:00.005922 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:02:00.007706 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:02:00.007743 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:02:00.009097 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:02:00.009815 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:02:00.011023 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:02:00.011357 | orchestrator |
2025-06-02 13:02:00.012244 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-02 13:02:00.013121 | orchestrator | Monday 02 June 2025 13:01:59 +0000 (0:00:01.545) 0:05:02.976 ***********
2025-06-02 13:02:00.737590 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:00.739243 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:00.739493 | orchestrator | changed: [testbed-manager]
2025-06-02 13:02:00.741993 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:00.742688 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:00.743821 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:00.744898 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:00.745799 | orchestrator |
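Setting the timezone is typically a single-module task; the two /etc/adjtime tasks that get skipped right after are presumably only needed when a hardware clock has to be kept in step. A sketch under that assumption (the role may implement it differently):

  - name: Set timezone to UTC
    community.general.timezone:
      name: UTC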
2025-06-02 13:02:00.746554 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-02 13:02:00.747440 | orchestrator | Monday 02 June 2025 13:02:00 +0000 (0:00:00.733) 0:05:03.709 ***********
2025-06-02 13:02:00.830520 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:02:00.882666 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:02:00.916216 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:02:00.948806 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:02:01.004389 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:02:01.004474 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:02:01.005689 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:02:01.006438 | orchestrator |
2025-06-02 13:02:01.007257 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-02 13:02:01.008396 | orchestrator | Monday 02 June 2025 13:02:00 +0000 (0:00:00.267) 0:05:03.977 ***********
2025-06-02 13:02:01.068355 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:02:01.113513 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:02:01.152107 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:02:01.184495 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:02:01.394923 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:02:01.395751 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:02:01.399361 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:02:01.399427 | orchestrator |
2025-06-02 13:02:01.399440 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-02 13:02:01.399450 | orchestrator | Monday 02 June 2025 13:02:01 +0000 (0:00:00.387) 0:05:04.365 ***********
2025-06-02 13:02:01.492729 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:01.545563 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:02:01.583470 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:02:01.627590 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:02:01.661009 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:02:01.739053 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:02:01.739229 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:02:01.740033 | orchestrator |
2025-06-02 13:02:01.740969 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-02 13:02:01.741744 | orchestrator | Monday 02 June 2025 13:02:01 +0000 (0:00:00.344) 0:05:04.709 ***********
2025-06-02 13:02:01.860604 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:02:01.906271 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:02:01.939981 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:02:01.971293 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:02:02.024529 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:02:02.024889 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:02:02.025643 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:02:02.026407 | orchestrator |
2025-06-02 13:02:02.026834 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-02 13:02:02.027707 | orchestrator | Monday 02 June 2025 13:02:02 +0000 (0:00:00.289) 0:05:04.998 ***********
2025-06-02 13:02:02.139832 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:02.167960 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:02:02.246600 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:02:02.288603 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:02:02.362847 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:02:02.363709 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:02:02.365265 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:02:02.366654 | orchestrator |
2025-06-02 13:02:02.367895 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-02 13:02:02.369402 | orchestrator | Monday 02 June 2025 13:02:02 +0000
(0:00:00.336) 0:05:05.335 *********** 2025-06-02 13:02:02.478223 | orchestrator | ok: [testbed-manager] =>  2025-06-02 13:02:02.478389 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.520010 | orchestrator | ok: [testbed-node-3] =>  2025-06-02 13:02:02.520319 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.552024 | orchestrator | ok: [testbed-node-4] =>  2025-06-02 13:02:02.552110 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.589482 | orchestrator | ok: [testbed-node-5] =>  2025-06-02 13:02:02.590409 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.660170 | orchestrator | ok: [testbed-node-0] =>  2025-06-02 13:02:02.661687 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.662289 | orchestrator | ok: [testbed-node-1] =>  2025-06-02 13:02:02.664073 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.664712 | orchestrator | ok: [testbed-node-2] =>  2025-06-02 13:02:02.664983 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 13:02:02.666691 | orchestrator | 2025-06-02 13:02:02.668006 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-02 13:02:02.671146 | orchestrator | Monday 02 June 2025 13:02:02 +0000 (0:00:00.296) 0:05:05.632 *********** 2025-06-02 13:02:02.788176 | orchestrator | ok: [testbed-manager] =>  2025-06-02 13:02:02.789005 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:02.930007 | orchestrator | ok: [testbed-node-3] =>  2025-06-02 13:02:02.930937 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:02.966602 | orchestrator | ok: [testbed-node-4] =>  2025-06-02 13:02:02.966649 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:03.005183 | orchestrator | ok: [testbed-node-5] =>  2025-06-02 13:02:03.007110 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:03.092120 | orchestrator | ok: [testbed-node-0] =>  2025-06-02 13:02:03.093070 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:03.094250 | orchestrator | ok: [testbed-node-1] =>  2025-06-02 13:02:03.095562 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:03.096089 | orchestrator | ok: [testbed-node-2] =>  2025-06-02 13:02:03.096950 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 13:02:03.097646 | orchestrator | 2025-06-02 13:02:03.098328 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-02 13:02:03.099444 | orchestrator | Monday 02 June 2025 13:02:03 +0000 (0:00:00.433) 0:05:06.065 *********** 2025-06-02 13:02:03.219094 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:03.249915 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:03.281554 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:03.312244 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:03.366203 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:03.367086 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:03.370535 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:03.370606 | orchestrator | 2025-06-02 13:02:03.371334 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-02 13:02:03.371427 | orchestrator | Monday 02 June 2025 13:02:03 +0000 (0:00:00.275) 0:05:06.340 *********** 2025-06-02 13:02:03.486330 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:03.518732 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:03.555418 | orchestrator 
| skipping: [testbed-node-4] 2025-06-02 13:02:03.600063 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:03.671200 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:03.671895 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:03.673105 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:03.673951 | orchestrator | 2025-06-02 13:02:03.674178 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-02 13:02:03.674672 | orchestrator | Monday 02 June 2025 13:02:03 +0000 (0:00:00.303) 0:05:06.644 *********** 2025-06-02 13:02:04.067247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:02:04.067351 | orchestrator | 2025-06-02 13:02:04.067977 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-02 13:02:04.068518 | orchestrator | Monday 02 June 2025 13:02:04 +0000 (0:00:00.395) 0:05:07.039 *********** 2025-06-02 13:02:04.910243 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:04.910619 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:04.911620 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:04.912725 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:04.913431 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:04.914609 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:04.915691 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:04.916332 | orchestrator | 2025-06-02 13:02:04.917207 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-02 13:02:04.917544 | orchestrator | Monday 02 June 2025 13:02:04 +0000 (0:00:00.841) 0:05:07.880 *********** 2025-06-02 13:02:07.681424 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:07.681614 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:07.682236 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:07.682567 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:07.682939 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:07.683441 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:07.683871 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:07.684374 | orchestrator | 2025-06-02 13:02:07.684837 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-02 13:02:07.685466 | orchestrator | Monday 02 June 2025 13:02:07 +0000 (0:00:02.767) 0:05:10.648 *********** 2025-06-02 13:02:07.755354 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-02 13:02:07.755962 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-02 13:02:07.839706 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-02 13:02:07.839913 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-02 13:02:07.840250 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-02 13:02:07.840930 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-02 13:02:07.926644 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:07.927656 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-02 13:02:07.928039 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-02 13:02:07.931311 | orchestrator | skipping: 
[testbed-node-4] => (item=docker-engine)
2025-06-02 13:02:07.993649 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:02:07.994881 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-02 13:02:07.997034 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-02 13:02:07.997318 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-02 13:02:08.210211 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:02:08.212004 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-02 13:02:08.215525 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-02 13:02:08.215550 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-02 13:02:08.283275 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:02:08.284017 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-02 13:02:08.287815 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-02 13:02:08.288476 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-02 13:02:08.443684 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:02:08.443964 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:02:08.444868 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-02 13:02:08.447420 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-02 13:02:08.448412 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-02 13:02:08.450229 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:02:08.451667 | orchestrator |
2025-06-02 13:02:08.451956 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-02 13:02:08.453644 | orchestrator | Monday 02 June 2025 13:02:08 +0000 (0:00:00.766) 0:05:11.414 ***********
2025-06-02 13:02:14.580570 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:14.581935 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:14.582978 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:14.584017 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:14.585636 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:14.586089 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:14.586802 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:14.587230 | orchestrator |
2025-06-02 13:02:14.589529 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-02 13:02:14.589577 | orchestrator | Monday 02 June 2025 13:02:14 +0000 (0:00:06.134) 0:05:17.549 ***********
2025-06-02 13:02:15.632004 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:15.632184 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:15.632843 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:15.633993 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:15.635205 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:15.636036 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:15.636607 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:15.637461 | orchestrator |
2025-06-02 13:02:15.638131 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-02 13:02:15.638590 | orchestrator | Monday 02 June 2025 13:02:15 +0000 (0:00:01.052) 0:05:18.601 ***********
2025-06-02 13:02:23.038553 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:23.039172 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:23.040607 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:23.041302 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:23.041788 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:23.043516 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:23.044510 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:23.044918 | orchestrator |
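Adding the Docker apt repository and pinning the package version, as logged above and below, generally reduces to an apt_key/apt_repository pair plus an apt preferences pin. A sketch assuming the upstream download.docker.com repository and a preferences.d pin file (URLs, paths and file names here are illustrative, not taken from the role):

  - name: Add repository gpg key
    ansible.builtin.apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg   # assumed upstream key URL
      state: present

  - name: Add repository
    ansible.builtin.apt_repository:
      repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
      state: present

  - name: Pin docker package version
    ansible.builtin.copy:
      dest: /etc/apt/preferences.d/docker-ce   # assumed pin file location
      content: |
        Package: docker-ce
        Pin: version {{ docker_version }}*     # docker_version is 5:27.5.1 in this run
        Pin-Priority: 1001

With the pin in place, the later install tasks report ok on testbed-manager, which is already at the pinned version, and changed on the freshly provisioned nodes.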
2025-06-02 13:02:23.045734 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-02 13:02:23.046258 | orchestrator | Monday 02 June 2025 13:02:23 +0000 (0:00:07.407) 0:05:26.009 ***********
2025-06-02 13:02:26.314172 | orchestrator | changed: [testbed-manager]
2025-06-02 13:02:26.314885 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:26.317925 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:26.317985 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:26.318597 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:26.321348 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:26.322357 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:26.323129 | orchestrator |
2025-06-02 13:02:26.324177 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-02 13:02:26.325112 | orchestrator | Monday 02 June 2025 13:02:26 +0000 (0:00:03.276) 0:05:29.286 ***********
2025-06-02 13:02:27.899183 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:27.899816 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:27.901924 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:27.902203 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:27.902993 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:27.905042 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:27.905073 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:27.905195 | orchestrator |
2025-06-02 13:02:27.906468 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-02 13:02:27.907461 | orchestrator | Monday 02 June 2025 13:02:27 +0000 (0:00:01.583) 0:05:30.869 ***********
2025-06-02 13:02:29.235983 | orchestrator | ok: [testbed-manager]
2025-06-02 13:02:29.236153 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:02:29.236254 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:02:29.236865 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:02:29.236894 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:02:29.237575 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:02:29.237660 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:02:29.237966 | orchestrator |
2025-06-02 13:02:29.238409 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-02 13:02:29.239187 | orchestrator | Monday 02 June 2025 13:02:29 +0000 (0:00:01.335) 0:05:32.205 ***********
2025-06-02 13:02:29.455180 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:02:29.522635 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:02:29.586800 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:02:29.671789 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:02:29.896844 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:02:29.897011 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:02:29.898127 | orchestrator | changed: [testbed-manager]
2025-06-02 13:02:29.899735 | orchestrator |
2025-06-02 13:02:29.901217 |
orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-02 13:02:29.902232 | orchestrator | Monday 02 June 2025 13:02:29 +0000 (0:00:00.662) 0:05:32.867 *********** 2025-06-02 13:02:39.228597 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:39.230766 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:39.230860 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:39.230876 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:39.232395 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:39.233145 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:39.233451 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:39.233845 | orchestrator | 2025-06-02 13:02:39.234251 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-02 13:02:39.234906 | orchestrator | Monday 02 June 2025 13:02:39 +0000 (0:00:09.328) 0:05:42.195 *********** 2025-06-02 13:02:40.162490 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:40.162677 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:40.162854 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:40.163315 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:40.166321 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:40.166422 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:40.166449 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:40.166479 | orchestrator | 2025-06-02 13:02:40.166503 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-02 13:02:40.166522 | orchestrator | Monday 02 June 2025 13:02:40 +0000 (0:00:00.939) 0:05:43.135 *********** 2025-06-02 13:02:48.725002 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:48.725188 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:48.727050 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:48.729266 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:48.729879 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:48.730761 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:48.731782 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:48.732307 | orchestrator | 2025-06-02 13:02:48.732721 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-02 13:02:48.733369 | orchestrator | Monday 02 June 2025 13:02:48 +0000 (0:00:08.559) 0:05:51.695 *********** 2025-06-02 13:02:59.152365 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:59.152560 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:59.153393 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:59.155434 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:59.156343 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:59.158131 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:59.158557 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:59.159015 | orchestrator | 2025-06-02 13:02:59.159408 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-02 13:02:59.161246 | orchestrator | Monday 02 June 2025 13:02:59 +0000 (0:00:10.425) 0:06:02.121 *********** 2025-06-02 13:02:59.566664 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-02 13:03:00.329232 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-02 13:03:00.330460 | orchestrator | ok: [testbed-node-4] 
=> (item=python3-docker) 2025-06-02 13:03:00.331346 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-02 13:03:00.332619 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-02 13:03:00.333863 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-02 13:03:00.334910 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-02 13:03:00.335924 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-02 13:03:00.336920 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-02 13:03:00.337875 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-02 13:03:00.338823 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-02 13:03:00.339553 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-02 13:03:00.340433 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-02 13:03:00.341373 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-02 13:03:00.342268 | orchestrator | 2025-06-02 13:03:00.342789 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-02 13:03:00.343657 | orchestrator | Monday 02 June 2025 13:03:00 +0000 (0:00:01.177) 0:06:03.298 *********** 2025-06-02 13:03:00.487080 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:00.556536 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:00.635229 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:00.707802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:00.776205 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:00.893880 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:00.894126 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:00.895129 | orchestrator | 2025-06-02 13:03:00.895810 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-02 13:03:00.896430 | orchestrator | Monday 02 June 2025 13:03:00 +0000 (0:00:00.568) 0:06:03.867 *********** 2025-06-02 13:03:04.626277 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:04.627194 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:04.628689 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:04.630318 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:04.630427 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:04.632635 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:04.633401 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:04.634283 | orchestrator | 2025-06-02 13:03:04.635191 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-02 13:03:04.636053 | orchestrator | Monday 02 June 2025 13:03:04 +0000 (0:00:03.730) 0:06:07.597 *********** 2025-06-02 13:03:04.756436 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:04.825822 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:04.891087 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:04.954150 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:05.024182 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:05.118260 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:05.118435 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:05.119571 | orchestrator | 2025-06-02 13:03:05.121473 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python 
bindings from pip)] *** 2025-06-02 13:03:05.124857 | orchestrator | Monday 02 June 2025 13:03:05 +0000 (0:00:00.492) 0:06:08.090 *********** 2025-06-02 13:03:05.191063 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-02 13:03:05.191235 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-02 13:03:05.263980 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:05.264518 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-02 13:03:05.265296 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-02 13:03:05.333295 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:05.333985 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-02 13:03:05.335179 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-02 13:03:05.402279 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:05.402691 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-02 13:03:05.404008 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-02 13:03:05.474661 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:05.475257 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-02 13:03:05.476445 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-02 13:03:05.540566 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:05.540775 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-02 13:03:05.541585 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-02 13:03:05.636041 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:05.636848 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-02 13:03:05.637802 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-02 13:03:05.639875 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:05.641229 | orchestrator | 2025-06-02 13:03:05.642117 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-02 13:03:05.645303 | orchestrator | Monday 02 June 2025 13:03:05 +0000 (0:00:00.517) 0:06:08.607 *********** 2025-06-02 13:03:05.767923 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:05.831499 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:05.900804 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:05.963415 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:06.026171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:06.126171 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:06.129396 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:06.129426 | orchestrator | 2025-06-02 13:03:06.132300 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-02 13:03:06.132342 | orchestrator | Monday 02 June 2025 13:03:06 +0000 (0:00:00.490) 0:06:09.097 *********** 2025-06-02 13:03:06.248948 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:06.318268 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:06.378990 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:06.443073 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:06.511021 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:06.644610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:06.644878 | 
orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:06.645601 | orchestrator |
2025-06-02 13:03:06.646330 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-02 13:03:06.646813 | orchestrator | Monday 02 June 2025 13:03:06 +0000 (0:00:00.518) 0:06:09.616 ***********
2025-06-02 13:03:06.778834 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:06.839731 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:06.901932 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:07.136571 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:07.202675 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:07.322604 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:07.323310 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:07.324246 | orchestrator |
2025-06-02 13:03:07.325258 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 13:03:07.328391 | orchestrator | Monday 02 June 2025 13:03:07 +0000 (0:00:00.678) 0:06:10.295 ***********
2025-06-02 13:03:08.969246 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:08.969845 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:03:08.970919 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:03:08.972573 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:03:08.974260 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:03:08.974677 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:03:08.976256 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:03:08.978651 | orchestrator |
2025-06-02 13:03:08.979438 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 13:03:08.980131 | orchestrator | Monday 02 June 2025 13:03:08 +0000 (0:00:01.644) 0:06:11.939 ***********
2025-06-02 13:03:09.792918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:03:09.795588 | orchestrator |
2025-06-02 13:03:09.797342 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 13:03:09.798209 | orchestrator | Monday 02 June 2025 13:03:09 +0000 (0:00:00.825) 0:06:12.764 ***********
2025-06-02 13:03:10.226334 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:10.654601 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:10.655877 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:10.656825 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:10.657730 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:10.659749 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:10.660577 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:10.661747 | orchestrator |
2025-06-02 13:03:10.662798 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 13:03:10.663706 | orchestrator | Monday 02 June 2025 13:03:10 +0000 (0:00:00.861) 0:06:13.625 ***********
2025-06-02 13:03:11.151431 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:11.232040 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:11.309069 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:11.795171 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:11.795332 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:11.795660 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:11.798082 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:11.798880 | orchestrator |
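The config tasks that follow drop a systemd drop-in and daemon.json into place and only reload or restart when something actually changed. A sketch of that pattern (template and register names are illustrative, not taken from the role):

  - name: Copy systemd overlay file
    ansible.builtin.template:
      src: overlay.conf.j2                       # illustrative template name
      dest: /etc/systemd/system/docker.service.d/overlay.conf
    register: docker_overlay_result              # illustrative register name

  - name: Reload systemd daemon if systemd overlay file is changed
    ansible.builtin.systemd:
      daemon_reload: true
    when: docker_overlay_result.changed

  - name: Copy daemon.json configuration file
    ansible.builtin.template:
      src: daemon.json.j2                        # illustrative template name
      dest: /etc/docker/daemon.json
    notify: Restart docker service

That conditional is why testbed-manager, whose overlay file was already in place, skips the reload below while the six nodes run it.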
2025-06-02 13:03:11.799932 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 13:03:11.801385 | orchestrator | Monday 02 June 2025 13:03:11 +0000 (0:00:01.139) 0:06:14.765 ***********
2025-06-02 13:03:13.243082 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:13.244036 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:13.244863 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:13.246598 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:13.247002 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:13.248917 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:13.250827 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:13.252193 | orchestrator |
2025-06-02 13:03:13.252839 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 13:03:13.254081 | orchestrator | Monday 02 June 2025 13:03:13 +0000 (0:00:01.447) 0:06:16.212 ***********
2025-06-02 13:03:13.382281 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:14.683622 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:03:14.684213 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:03:14.687679 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:03:14.688299 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:03:14.689798 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:03:14.690169 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:03:14.691113 | orchestrator |
2025-06-02 13:03:14.692209 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 13:03:14.696067 | orchestrator | Monday 02 June 2025 13:03:14 +0000 (0:00:01.443) 0:06:17.655 ***********
2025-06-02 13:03:16.049086 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:16.049198 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:16.050580 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:16.052000 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:16.053340 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:16.054093 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:16.054676 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:16.055426 | orchestrator |
2025-06-02 13:03:16.055982 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 13:03:16.057063 | orchestrator | Monday 02 June 2025 13:03:16 +0000 (0:00:01.362) 0:06:19.018 ***********
2025-06-02 13:03:17.378936 | orchestrator | changed: [testbed-manager]
2025-06-02 13:03:17.379109 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:17.380213 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:17.381860 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:17.382823 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:17.384099 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:17.384750 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:17.385667 | orchestrator |
2025-06-02 13:03:17.386412 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-02 13:03:17.386924 | orchestrator | Monday 02 June 2025 13:03:17 +0000 (0:00:01.330) 0:06:20.348 ***********
2025-06-02 13:03:18.608255 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:03:18.609187 | orchestrator | 2025-06-02 13:03:18.611388 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-02 13:03:18.611417 | orchestrator | Monday 02 June 2025 13:03:18 +0000 (0:00:01.232) 0:06:21.580 *********** 2025-06-02 13:03:20.061512 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:20.062645 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:20.064098 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:20.065261 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:20.065791 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:20.066848 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:20.067183 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:20.068119 | orchestrator | 2025-06-02 13:03:20.068492 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-02 13:03:20.069329 | orchestrator | Monday 02 June 2025 13:03:20 +0000 (0:00:01.451) 0:06:23.032 *********** 2025-06-02 13:03:21.207224 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:21.207359 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:21.207374 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:21.207802 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:21.208838 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:21.209846 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:21.211805 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:21.212515 | orchestrator | 2025-06-02 13:03:21.213170 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-02 13:03:21.213738 | orchestrator | Monday 02 June 2025 13:03:21 +0000 (0:00:01.140) 0:06:24.172 *********** 2025-06-02 13:03:22.557265 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:22.557396 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:22.557528 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:22.558292 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:22.558350 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:22.558504 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:22.561266 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:22.562344 | orchestrator | 2025-06-02 13:03:22.563212 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-02 13:03:22.564386 | orchestrator | Monday 02 June 2025 13:03:22 +0000 (0:00:01.354) 0:06:25.527 *********** 2025-06-02 13:03:23.690611 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:23.691761 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:23.692184 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:23.692462 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:23.692715 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:23.693452 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:23.694243 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:23.694631 | orchestrator | 2025-06-02 13:03:23.696134 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-02 13:03:23.696522 | orchestrator | Monday 02 June 2025 13:03:23 +0000 (0:00:01.134) 0:06:26.662 *********** 2025-06-02 13:03:24.914364 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:03:24.914594 | orchestrator |
2025-06-02 13:03:24.915406 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.916221 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.909) 0:06:27.571 ***********
2025-06-02 13:03:24.916892 | orchestrator |
2025-06-02 13:03:24.919173 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.919198 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.045) 0:06:27.616 ***********
2025-06-02 13:03:24.919807 | orchestrator |
2025-06-02 13:03:24.920834 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.921970 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.040) 0:06:27.656 ***********
2025-06-02 13:03:24.922747 | orchestrator |
2025-06-02 13:03:24.923195 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.923498 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.060) 0:06:27.717 ***********
2025-06-02 13:03:24.924390 | orchestrator |
2025-06-02 13:03:24.924796 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.925361 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.039) 0:06:27.757 ***********
2025-06-02 13:03:24.925770 | orchestrator |
2025-06-02 13:03:24.926452 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.926795 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.040) 0:06:27.797 ***********
2025-06-02 13:03:24.927214 | orchestrator |
2025-06-02 13:03:24.927534 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:03:24.927956 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.046) 0:06:27.844 ***********
2025-06-02 13:03:24.928356 | orchestrator |
2025-06-02 13:03:24.928692 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 13:03:24.929172 | orchestrator | Monday 02 June 2025 13:03:24 +0000 (0:00:00.039) 0:06:27.883 ***********
2025-06-02 13:03:26.314010 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:03:26.314375 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:03:26.315652 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:03:26.317089 | orchestrator |
2025-06-02 13:03:26.318243 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-02 13:03:26.318892 | orchestrator | Monday 02 June 2025 13:03:26 +0000 (0:00:01.393) 0:06:29.277 ***********
2025-06-02 13:03:27.690952 | orchestrator | changed: [testbed-manager]
2025-06-02 13:03:27.692415 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:27.694737 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:27.695926 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:27.696950 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:27.697890 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:27.698791 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:27.699716 | orchestrator |
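The block of seven Flush handlers entries above comes from meta: flush_handlers calls; each flush point runs any handlers notified so far instead of waiting for the end of the play. That is why the rsyslog, smartd and docker restarts appear here as RUNNING HANDLER entries rather than after the play recap. In role terms, roughly:

  # in tasks:
  - name: Flush handlers
    ansible.builtin.meta: flush_handlers

  # in handlers:
  - name: Restart docker service
    ansible.builtin.service:
      name: docker
      state: restarted

Note that testbed-manager skips the docker restart below; the handler is evidently guarded by a condition that excludes the manager.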
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-02 13:03:27.701045 | orchestrator | Monday 02 June 2025 13:03:27 +0000 (0:00:01.381) 0:06:30.659 *********** 2025-06-02 13:03:28.837727 | orchestrator | changed: [testbed-manager] 2025-06-02 13:03:28.839031 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:28.840078 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:28.840913 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:28.842891 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:28.843450 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:28.843826 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:28.844828 | orchestrator | 2025-06-02 13:03:28.844857 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-02 13:03:28.845506 | orchestrator | Monday 02 June 2025 13:03:28 +0000 (0:00:01.149) 0:06:31.809 *********** 2025-06-02 13:03:28.978574 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:31.096997 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:31.097383 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:31.104401 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:31.104812 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:31.105200 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:31.105737 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:31.106100 | orchestrator | 2025-06-02 13:03:31.106639 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-02 13:03:31.106995 | orchestrator | Monday 02 June 2025 13:03:31 +0000 (0:00:02.256) 0:06:34.065 *********** 2025-06-02 13:03:31.229744 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:31.229844 | orchestrator | 2025-06-02 13:03:31.235067 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-02 13:03:31.235096 | orchestrator | Monday 02 June 2025 13:03:31 +0000 (0:00:00.137) 0:06:34.202 *********** 2025-06-02 13:03:32.244816 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:32.245731 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:32.246538 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:32.248040 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:32.249915 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:32.250856 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:32.251981 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:32.252898 | orchestrator | 2025-06-02 13:03:32.253843 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-02 13:03:32.255010 | orchestrator | Monday 02 June 2025 13:03:32 +0000 (0:00:01.013) 0:06:35.215 *********** 2025-06-02 13:03:32.617894 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:32.686964 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:32.776115 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:32.848028 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:32.994211 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:32.995918 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:32.997392 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:32.998367 | orchestrator | 2025-06-02 13:03:32.999357 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-06-02 13:03:33.000155 | orchestrator | Monday 02 June 2025 13:03:32 +0000 (0:00:00.749) 0:06:35.965 *********** 2025-06-02 13:03:33.888668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:03:33.891621 | orchestrator | 2025-06-02 13:03:33.891685 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-02 13:03:33.892032 | orchestrator | Monday 02 June 2025 13:03:33 +0000 (0:00:00.894) 0:06:36.859 *********** 2025-06-02 13:03:34.360159 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:34.789512 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:34.789620 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:34.790519 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:34.791357 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:34.791862 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:34.792675 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:34.792907 | orchestrator | 2025-06-02 13:03:34.793346 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-02 13:03:34.793798 | orchestrator | Monday 02 June 2025 13:03:34 +0000 (0:00:00.901) 0:06:37.761 *********** 2025-06-02 13:03:37.422549 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-02 13:03:37.422854 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-02 13:03:37.424533 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-02 13:03:37.427054 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-02 13:03:37.429059 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-02 13:03:37.429530 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-02 13:03:37.430484 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-02 13:03:37.431656 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-02 13:03:37.431994 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-02 13:03:37.432468 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-02 13:03:37.433204 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-02 13:03:37.433712 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-02 13:03:37.434427 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-02 13:03:37.435059 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-02 13:03:37.435541 | orchestrator |
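The docker fact files copied above land in Ansible's local facts directory, typically /etc/ansible/facts.d, where any *.fact file (static JSON/INI, or an executable that prints JSON) is exposed to later plays under the ansible_local variable. A minimal sketch of such an executable fact, assuming a hypothetical file of this shape (the actual OSISM fact scripts are not shown in this log):

    #!/usr/bin/env bash
    # /etc/ansible/facts.d/docker_containers.fact (illustrative)
    # Executable local facts must print a JSON object on stdout; the result
    # becomes ansible_local.docker_containers on the next fact gathering.
    echo "{\"count\": $(docker ps --quiet | wc -l)}"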
2025-06-02 13:03:37.436598 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-02 13:03:37.436819 | orchestrator | Monday 02 June 2025 13:03:37 +0000 (0:00:02.632) 0:06:40.393 *********** 2025-06-02 13:03:37.555388 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:37.618916 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:37.683458 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:37.754918 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:37.818739 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:37.912066 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:37.912827 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:37.913820 | orchestrator | 2025-06-02 13:03:37.914984 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-02 13:03:37.918670 | orchestrator | Monday 02 June 2025 13:03:37 +0000 (0:00:00.490) 0:06:40.884 *********** 2025-06-02 13:03:38.743060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:03:38.744211 | orchestrator | 2025-06-02 13:03:38.744274 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-02 13:03:38.744336 | orchestrator | Monday 02 June 2025 13:03:38 +0000 (0:00:00.829) 0:06:41.713 *********** 2025-06-02 13:03:39.402934 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:39.827768 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:39.831126 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:39.831162 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:39.831666 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:39.832672 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:39.833568 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:39.834230 | orchestrator | 2025-06-02 13:03:39.834860 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-02 13:03:39.836530 | orchestrator | Monday 02 June 2025 13:03:39 +0000 (0:00:01.084) 0:06:42.798 *********** 2025-06-02 13:03:40.295121 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:40.703277 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:40.704429 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:40.705452 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:40.706535 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:40.707437 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:40.708048 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:40.709542 | orchestrator | 2025-06-02 13:03:40.710181 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-02 13:03:40.710523 | orchestrator | Monday 02 June 2025 13:03:40 +0000 (0:00:00.877) 0:06:43.675 *********** 2025-06-02 13:03:40.831369 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:40.914840 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:40.982255 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:41.049114 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:41.117543 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:41.212046 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:41.212369 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:41.213495 | orchestrator | 2025-06-02 13:03:41.214204 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-02 13:03:41.218065 | orchestrator | Monday 02 June 2025 13:03:41 +0000 (0:00:00.506) 0:06:44.182 *********** 2025-06-02 13:03:42.824572 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:42.824815 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:42.825645 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:42.826514 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:42.827216 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:42.828900 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:03:42.829528 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:42.830201 | orchestrator | 2025-06-02 13:03:42.831828 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-02 13:03:42.832497 | orchestrator | Monday 02 June 2025 13:03:42 +0000 (0:00:01.611) 0:06:45.794 *********** 2025-06-02 13:03:42.955477 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:43.019099 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:43.085506 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:43.147099 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:43.214621 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:43.319765 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:43.321113 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:43.321658 | orchestrator | 2025-06-02 13:03:43.323477 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-02 13:03:43.324365 | orchestrator | Monday 02 June 2025 13:03:43 +0000 (0:00:00.499) 0:06:46.293 *********** 2025-06-02 13:03:51.108110 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:51.108295 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:51.110668 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:51.110812 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:51.111916 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:51.112922 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:51.113801 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:51.114553 | orchestrator | 2025-06-02 13:03:51.115385 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-02 13:03:51.116047 | orchestrator | Monday 02 June 2025 13:03:51 +0000 (0:00:07.785) 0:06:54.078 *********** 2025-06-02 13:03:52.460931 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:52.461036 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:52.461474 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:52.462595 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:52.463465 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:52.464655 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:52.465894 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:52.466087 | orchestrator | 2025-06-02 13:03:52.466944 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-02 13:03:52.467598 | orchestrator | Monday 02 June 2025 13:03:52 +0000 (0:00:01.353) 0:06:55.432 *********** 2025-06-02 13:03:54.221510 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:54.223743 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:54.228280 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:54.230301 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:54.232287 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:54.232721 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:54.235595 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:54.235611 | orchestrator |
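The osism.target installed and enabled above is a systemd target used to group the docker-compose based services so they can be enabled, ordered, and started as one unit. A minimal sketch of what such a target unit looks like, as an assumption about the shape of the shipped template rather than a copy of it:

    # /etc/systemd/system/osism.target (illustrative)
    [Unit]
    Description=OSISM services

    [Install]
    WantedBy=multi-user.target

The docker-compose unit file copied in the next task would then typically declare WantedBy=osism.target in its [Install] section, so that systemctl start osism.target brings the compose stacks up together.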
2025-06-02 13:03:54.235619 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-02 13:03:54.235627 | orchestrator | Monday 02 June 2025 13:03:54 +0000 (0:00:01.761) 0:06:57.193 *********** 2025-06-02 13:03:55.894923 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:55.895039 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:55.898472 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:55.898509 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:55.898522 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:55.898742 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:55.899871 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:55.900877 | orchestrator | 2025-06-02 13:03:55.901375 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 13:03:55.902360 | orchestrator | Monday 02 June 2025 13:03:55 +0000 (0:00:01.671) 0:06:58.864 *********** 2025-06-02 13:03:56.386167 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:56.943233 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:56.943364 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:56.943389 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:56.943957 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:56.944399 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:56.945192 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:56.945961 | orchestrator | 2025-06-02 13:03:56.946221 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 13:03:56.947060 | orchestrator | Monday 02 June 2025 13:03:56 +0000 (0:00:01.050) 0:06:59.914 *********** 2025-06-02 13:03:57.099013 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:57.168994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:57.232217 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:57.294842 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:57.363243 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:57.757585 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:57.758695 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:57.759898 | orchestrator | 2025-06-02 13:03:57.761109 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-02 13:03:57.762566 | orchestrator | Monday 02 June 2025 13:03:57 +0000 (0:00:00.814) 0:07:00.729 *********** 2025-06-02 13:03:57.895494 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:03:57.956644 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:58.027986 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:58.089922 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:58.149972 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:58.253973 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:58.254858 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:58.255806 | orchestrator | 2025-06-02 13:03:58.256968 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-02 13:03:58.257986 | orchestrator | Monday 02 June 2025 13:03:58 +0000 (0:00:00.656) 0:07:01.227 *********** 2025-06-02 13:03:58.378441 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:58.446998 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:58.512437 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:58.571879 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:58.804958 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:58.913338 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:58.914724 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:58.915454 | orchestrator | 2025-06-02 13:03:58.918561 | orchestrator | TASK
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-02 13:03:58.919773 | orchestrator | Monday 02 June 2025 13:03:58 +0000 (0:00:00.656) 0:07:01.884 *********** 2025-06-02 13:03:59.052418 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:59.117784 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:59.181149 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:59.249568 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:59.313446 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:59.420917 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:59.421521 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:59.422874 | orchestrator | 2025-06-02 13:03:59.423925 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-02 13:03:59.424969 | orchestrator | Monday 02 June 2025 13:03:59 +0000 (0:00:00.508) 0:07:02.392 *********** 2025-06-02 13:03:59.566312 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:59.629467 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:03:59.702348 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:03:59.768622 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:03:59.833368 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:03:59.944147 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:03:59.944750 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:03:59.945874 | orchestrator | 2025-06-02 13:03:59.946816 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-02 13:03:59.948171 | orchestrator | Monday 02 June 2025 13:03:59 +0000 (0:00:00.521) 0:07:02.914 *********** 2025-06-02 13:04:05.450878 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:05.450999 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:05.451015 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:05.452154 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:05.453568 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:05.453833 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:05.454537 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:05.455240 | orchestrator | 2025-06-02 13:04:05.458494 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-02 13:04:05.458534 | orchestrator | Monday 02 June 2025 13:04:05 +0000 (0:00:05.508) 0:07:08.422 *********** 2025-06-02 13:04:05.654534 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:04:05.733105 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:04:05.796338 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:04:05.863258 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:04:05.980642 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:04:05.980986 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:04:05.981925 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:04:05.982823 | orchestrator | 2025-06-02 13:04:05.984030 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-02 13:04:05.984636 | orchestrator | Monday 02 June 2025 13:04:05 +0000 (0:00:00.529) 0:07:08.952 *********** 2025-06-02 13:04:06.986691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:04:06.987708 | orchestrator | 2025-06-02 
13:04:06.988526 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-02 13:04:06.989308 | orchestrator | Monday 02 June 2025 13:04:06 +0000 (0:00:01.005) 0:07:09.957 *********** 2025-06-02 13:04:08.743264 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:08.743829 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:08.745185 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:08.746190 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:08.746628 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:08.748662 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:08.749327 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:08.750270 | orchestrator | 2025-06-02 13:04:08.750772 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-02 13:04:08.752118 | orchestrator | Monday 02 June 2025 13:04:08 +0000 (0:00:01.756) 0:07:11.713 *********** 2025-06-02 13:04:09.937847 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:09.938014 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:09.938709 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:09.939661 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:09.940330 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:09.941731 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:09.942349 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:09.943269 | orchestrator | 2025-06-02 13:04:09.944236 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-02 13:04:09.944857 | orchestrator | Monday 02 June 2025 13:04:09 +0000 (0:00:01.194) 0:07:12.908 *********** 2025-06-02 13:04:10.555627 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:10.983422 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:10.983977 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:10.985055 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:10.986559 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:10.988279 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:10.989063 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:10.989877 | orchestrator | 2025-06-02 13:04:10.990576 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-02 13:04:10.991948 | orchestrator | Monday 02 June 2025 13:04:10 +0000 (0:00:01.045) 0:07:13.954 *********** 2025-06-02 13:04:12.621464 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.623328 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.623530 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.623552 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.624593 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.625434 | orchestrator | changed: [testbed-node-1] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.626265 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:12.627000 | orchestrator |
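The rendered chrony.conf.j2 template replaces the distribution default configuration; the Restart chrony service handler further down picks it up. A minimal sketch of the kind of configuration such a template typically produces (server names are placeholders, and the actual OSISM template may set additional directives):

    # /etc/chrony/chrony.conf (illustrative)
    server ntp1.example.com iburst
    server ntp2.example.com iburst
    driftfile /var/lib/chrony/chrony.drift
    # step the clock if it is off by more than 1 s during the first 3 updates
    makestep 1.0 3
    rtcsync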
2025-06-02 13:04:12.628275 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-02 13:04:12.629226 | orchestrator | Monday 02 June 2025 13:04:12 +0000 (0:00:01.634) 0:07:15.589 *********** 2025-06-02 13:04:13.421112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:04:13.421250 | orchestrator | 2025-06-02 13:04:13.421702 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-02 13:04:13.422621 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.801) 0:07:16.391 *********** 2025-06-02 13:04:21.850954 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:21.851169 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:21.851828 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:21.852504 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:21.853404 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:21.858239 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:21.859359 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:21.860293 | orchestrator | 2025-06-02 13:04:21.860317 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-02 13:04:21.861990 | orchestrator | Monday 02 June 2025 13:04:21 +0000 (0:00:08.429) 0:07:24.820 *********** 2025-06-02 13:04:23.586938 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:23.587074 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:23.587774 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:23.588443 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:23.589147 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:23.590442 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:23.591356 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:23.592231 | orchestrator | 2025-06-02 13:04:23.593475 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-02 13:04:23.593810 | orchestrator | Monday 02 June 2025 13:04:23 +0000 (0:00:01.737) 0:07:26.558 *********** 2025-06-02 13:04:24.884955 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:24.885096 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:24.885746 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:24.886533 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:24.887485 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:24.887806 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:24.888102 | orchestrator | 2025-06-02 13:04:24.888483 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-02 13:04:24.891137 | orchestrator | Monday 02 June 2025 13:04:24 +0000 (0:00:01.297) 0:07:27.856 *********** 2025-06-02 13:04:26.331515 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:26.331702 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:26.333188 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:26.333210 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:26.333223 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:26.333235 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:26.333591 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:26.334421 | orchestrator | 2025-06-02 13:04:26.335024 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-02 13:04:26.337047 | orchestrator | 2025-06-02 13:04:26.337071 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-02 13:04:26.337084 | orchestrator | Monday 02 June 2025 13:04:26 +0000 (0:00:01.448) 0:07:29.304 *********** 2025-06-02 13:04:26.515128 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:04:26.577234 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:04:26.640288 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:04:26.706727 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:04:26.765437 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:04:26.879798 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:04:26.881311 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:04:26.882838 | orchestrator | 2025-06-02 13:04:26.883711 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-02 13:04:26.884829 | orchestrator | 2025-06-02 13:04:26.886552 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-02 13:04:26.886806 | orchestrator | Monday 02 June 2025 13:04:26 +0000 (0:00:00.547) 0:07:29.852 *********** 2025-06-02 13:04:28.210790 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:28.212417 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:28.212567 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:28.214535 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:28.216301 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:28.217273 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:28.218586 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:28.219856 | orchestrator | 2025-06-02 13:04:28.221002 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-02 13:04:28.221748 | orchestrator | Monday 02 June 2025 13:04:28 +0000 (0:00:01.330) 0:07:31.182 *********** 2025-06-02 13:04:29.659231 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:29.661272 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:29.662414 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:29.663553 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:29.665076 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:29.665884 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:29.666778 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:29.667481 | orchestrator | 2025-06-02 13:04:29.668441 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-02 13:04:29.669206 | orchestrator | Monday 02 June 2025 13:04:29 +0000 (0:00:01.445) 0:07:32.627 *********** 2025-06-02 13:04:29.989620 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:04:30.053009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:04:30.122872 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:04:30.187505 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:04:30.247731 | orchestrator | skipping:
[testbed-node-0] 2025-06-02 13:04:30.633121 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:04:30.634752 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:04:30.638297 | orchestrator | 2025-06-02 13:04:30.638339 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-02 13:04:30.638353 | orchestrator | Monday 02 June 2025 13:04:30 +0000 (0:00:00.977) 0:07:33.605 *********** 2025-06-02 13:04:31.883080 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:31.884926 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:31.884952 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:31.895112 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:31.895144 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:31.895156 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:31.895168 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:31.895180 | orchestrator | 2025-06-02 13:04:31.895193 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-02 13:04:31.895247 | orchestrator | 2025-06-02 13:04:31.895414 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-02 13:04:31.896476 | orchestrator | Monday 02 June 2025 13:04:31 +0000 (0:00:01.248) 0:07:34.853 *********** 2025-06-02 13:04:32.863616 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:04:32.866076 | orchestrator | 2025-06-02 13:04:32.868756 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 13:04:32.868768 | orchestrator | Monday 02 June 2025 13:04:32 +0000 (0:00:00.980) 0:07:35.834 *********** 2025-06-02 13:04:33.367026 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:33.820115 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:33.821179 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:33.822640 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:33.823673 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:33.824224 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:33.824929 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:33.825836 | orchestrator | 2025-06-02 13:04:33.826115 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 13:04:33.826722 | orchestrator | Monday 02 June 2025 13:04:33 +0000 (0:00:00.955) 0:07:36.790 *********** 2025-06-02 13:04:34.990766 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:34.991185 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:34.995680 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:34.997577 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:34.998112 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:34.998138 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:34.998907 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:34.999713 | orchestrator | 2025-06-02 13:04:35.000503 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-02 13:04:35.001182 | orchestrator | Monday 02 June 2025 13:04:34 +0000 (0:00:01.171) 0:07:37.961 *********** 2025-06-02 13:04:35.986995 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-02 13:04:35.987232 | orchestrator | 2025-06-02 13:04:35.987809 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 13:04:35.988610 | orchestrator | Monday 02 June 2025 13:04:35 +0000 (0:00:00.995) 0:07:38.957 *********** 2025-06-02 13:04:36.383298 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:36.816263 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:36.817573 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:36.818745 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:36.820238 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:36.821609 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:36.824609 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:36.825321 | orchestrator | 2025-06-02 13:04:36.826109 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 13:04:36.827335 | orchestrator | Monday 02 June 2025 13:04:36 +0000 (0:00:00.827) 0:07:39.785 *********** 2025-06-02 13:04:37.910139 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:37.910239 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:37.910311 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:37.911370 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:37.912666 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:37.913295 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:37.913978 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:37.915450 | orchestrator | 2025-06-02 13:04:37.916023 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:04:37.916338 | orchestrator | 2025-06-02 13:04:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:04:37.916445 | orchestrator | 2025-06-02 13:04:37 | INFO  | Please wait and do not abort execution. 
2025-06-02 13:04:37.917845 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-02 13:04:37.918626 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:04:37.919247 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:04:37.919744 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:04:37.920603 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-02 13:04:37.921042 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:04:37.921577 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:04:37.921989 | orchestrator | 2025-06-02 13:04:37.922407 | orchestrator | 2025-06-02 13:04:37.922772 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:04:37.923243 | orchestrator | Monday 02 June 2025 13:04:37 +0000 (0:00:01.095) 0:07:40.880 *********** 2025-06-02 13:04:37.923622 | orchestrator | =============================================================================== 2025-06-02 13:04:37.923962 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.52s 2025-06-02 13:04:37.924459 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.77s 2025-06-02 13:04:37.924733 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.21s 2025-06-02 13:04:37.925190 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.64s 2025-06-02 13:04:37.925536 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.88s 2025-06-02 13:04:37.925860 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.57s 2025-06-02 13:04:37.926237 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.43s 2025-06-02 13:04:37.926520 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.33s 2025-06-02 13:04:37.926873 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.56s 2025-06-02 13:04:37.928206 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.43s 2025-06-02 13:04:37.928234 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.23s 2025-06-02 13:04:37.928245 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.87s 2025-06-02 13:04:37.929043 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.79s 2025-06-02 13:04:37.929298 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.55s 2025-06-02 13:04:37.929600 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.41s 2025-06-02 13:04:37.929899 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.36s 2025-06-02 13:04:37.930312 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.14s 2025-06-02 13:04:37.930577 | 
orchestrator | osism.commons.services : Populate service facts ------------------------- 5.70s 2025-06-02 13:04:37.930857 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.51s 2025-06-02 13:04:37.931162 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.51s 2025-06-02 13:04:38.624853 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 13:04:38.624974 | orchestrator | + osism apply network 2025-06-02 13:04:40.757583 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:04:40.757742 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:04:40.757759 | orchestrator | Registering Redlock._release_script 2025-06-02 13:04:40.823151 | orchestrator | 2025-06-02 13:04:40 | INFO  | Task 2a0f36e6-ee6c-4228-9867-a2d266b5c014 (network) was prepared for execution. 2025-06-02 13:04:40.823214 | orchestrator | 2025-06-02 13:04:40 | INFO  | It takes a moment until task 2a0f36e6-ee6c-4228-9867-a2d266b5c014 (network) has been started and output is visible here. 2025-06-02 13:04:45.089965 | orchestrator | 2025-06-02 13:04:45.092665 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-02 13:04:45.095292 | orchestrator | 2025-06-02 13:04:45.096155 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-02 13:04:45.097226 | orchestrator | Monday 02 June 2025 13:04:45 +0000 (0:00:00.272) 0:00:00.272 *********** 2025-06-02 13:04:45.240077 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:45.316191 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:45.391139 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:45.468265 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:45.650951 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:45.781098 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:45.781831 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:45.782434 | orchestrator | 2025-06-02 13:04:45.786731 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-02 13:04:45.787460 | orchestrator | Monday 02 June 2025 13:04:45 +0000 (0:00:00.691) 0:00:00.963 *********** 2025-06-02 13:04:46.954262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:04:46.956008 | orchestrator | 2025-06-02 13:04:46.959733 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-02 13:04:46.960470 | orchestrator | Monday 02 June 2025 13:04:46 +0000 (0:00:01.172) 0:00:02.136 *********** 2025-06-02 13:04:48.938263 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:48.938372 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:48.939813 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:48.940620 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:48.943733 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:48.943878 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:48.944935 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:48.945982 | orchestrator | 2025-06-02 13:04:48.946606 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-02 13:04:48.948522 | orchestrator | Monday 02 June 2025 13:04:48 +0000 (0:00:01.985) 
0:00:04.122 *********** 2025-06-02 13:04:50.658922 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:50.659129 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:50.660464 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:50.662615 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:50.667608 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:50.668314 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:50.669767 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:50.670563 | orchestrator | 2025-06-02 13:04:50.671722 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-02 13:04:50.672838 | orchestrator | Monday 02 June 2025 13:04:50 +0000 (0:00:01.716) 0:00:05.838 *********** 2025-06-02 13:04:51.181575 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-02 13:04:51.652005 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-02 13:04:51.653934 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-02 13:04:51.655518 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-02 13:04:51.656744 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-02 13:04:51.662105 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-02 13:04:51.662133 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-02 13:04:51.662146 | orchestrator | 2025-06-02 13:04:51.662159 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-02 13:04:51.662209 | orchestrator | Monday 02 June 2025 13:04:51 +0000 (0:00:00.998) 0:00:06.837 *********** 2025-06-02 13:04:54.998269 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 13:04:55.000115 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 13:04:55.000559 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 13:04:55.001700 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:04:55.004422 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:04:55.004455 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 13:04:55.004467 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 13:04:55.004942 | orchestrator | 2025-06-02 13:04:55.005922 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-02 13:04:55.006543 | orchestrator | Monday 02 June 2025 13:04:54 +0000 (0:00:03.341) 0:00:10.179 *********** 2025-06-02 13:04:56.468386 | orchestrator | changed: [testbed-manager] 2025-06-02 13:04:56.468771 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:04:56.469793 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:04:56.471509 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:04:56.472607 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:04:56.475379 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:04:56.476111 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:04:56.477738 | orchestrator |
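The file copied here is the role's rendered netplan configuration, written as /etc/netplan/01-osism.yaml (the name is visible in the cleanup task further down, where the cloud-init default 50-cloud-init.yaml is removed). A minimal sketch of a netplan file of this shape, with the interface name and address as placeholder assumptions drawn from the testbed's management range:

    # /etc/netplan/01-osism.yaml (illustrative)
    network:
      version: 2
      ethernets:
        ens3:
          addresses:
            - 192.168.16.10/20

Such a file is activated with netplan apply, which generates the corresponding systemd-networkd configuration.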
2025-06-02 13:04:56.478953 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-02 13:04:56.479409 | orchestrator | Monday 02 June 2025 13:04:56 +0000 (0:00:01.474) 0:00:11.653 *********** 2025-06-02 13:04:58.311547 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:04:58.311907 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:04:58.312612 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 13:04:58.313284 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 13:04:58.313667 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 13:04:58.314094 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 13:04:58.314891 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 13:04:58.315028 | orchestrator | 2025-06-02 13:04:58.315696 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-02 13:04:58.316125 | orchestrator | Monday 02 June 2025 13:04:58 +0000 (0:00:01.843) 0:00:13.496 *********** 2025-06-02 13:04:58.734754 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:59.010324 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:59.433070 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:59.434333 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:59.436162 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:59.437339 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:59.438278 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:59.439303 | orchestrator | 2025-06-02 13:04:59.440924 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-02 13:04:59.441185 | orchestrator | Monday 02 June 2025 13:04:59 +0000 (0:00:01.115) 0:00:14.612 *********** 2025-06-02 13:04:59.594756 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:04:59.680091 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:04:59.765749 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:04:59.848528 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:04:59.932210 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:00.079726 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:00.081449 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:00.082602 | orchestrator | 2025-06-02 13:05:00.084110 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-02 13:05:00.085229 | orchestrator | Monday 02 June 2025 13:05:00 +0000 (0:00:00.652) 0:00:15.265 *********** 2025-06-02 13:05:02.283912 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:02.285109 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:02.286328 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:02.287759 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:02.289788 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:02.290694 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:02.291400 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:02.292521 | orchestrator | 2025-06-02 13:05:02.293217 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-02 13:05:02.295533 | orchestrator | Monday 02 June 2025 13:05:02 +0000 (0:00:02.199) 0:00:17.464 *********** 2025-06-02 13:05:02.581254 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:02.662245 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:02.748712 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:02.832086 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:03.184843 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:03.185381 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:03.186883 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-02 13:05:03.186914 | orchestrator |
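networkd-dispatcher runs the scripts placed in /etc/networkd-dispatcher/<state>.d/ whenever an interface managed by systemd-networkd reaches that state, so the iptables.sh installed above is executed each time an interface becomes routable. The actual /opt/configuration/network/iptables.sh is not shown in this log; a hedged sketch of what a hook of this kind can look like:

    #!/usr/bin/env bash
    # /etc/networkd-dispatcher/routable.d/iptables.sh (illustrative)
    # Re-apply NAT for the testbed management network once an interface is
    # routable; -C checks for the rule first so it is only added once.
    set -eu
    iptables -t nat -C POSTROUTING -s 192.168.16.0/20 -j MASQUERADE 2>/dev/null ||
        iptables -t nat -A POSTROUTING -s 192.168.16.0/20 -j MASQUERADE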
2025-06-02 13:05:03.186930 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-02 13:05:03.187903 | orchestrator | Monday 02 June 2025 13:05:03 +0000 (0:00:00.903) 0:00:18.368 *********** 2025-06-02 13:05:04.895990 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:04.896850 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:04.897897 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:04.901718 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:04.902269 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:04.903339 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:04.904609 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:04.905906 | orchestrator | 2025-06-02 13:05:04.906829 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-02 13:05:04.911223 | orchestrator | Monday 02 June 2025 13:05:04 +0000 (0:00:01.708) 0:00:20.077 *********** 2025-06-02 13:05:06.182487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:05:06.182719 | orchestrator | 2025-06-02 13:05:06.184237 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 13:05:06.185749 | orchestrator | Monday 02 June 2025 13:05:06 +0000 (0:00:01.285) 0:00:21.362 *********** 2025-06-02 13:05:06.719919 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:07.313138 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:07.313307 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:07.315342 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:07.316040 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:07.320417 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:07.321884 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:07.323272 | orchestrator | 2025-06-02 13:05:07.323556 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-02 13:05:07.324565 | orchestrator | Monday 02 June 2025 13:05:07 +0000 (0:00:01.132) 0:00:22.495 *********** 2025-06-02 13:05:07.483000 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:07.564618 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:07.651933 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:07.737317 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:07.820292 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:07.965292 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:07.966758 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:07.967714 | orchestrator | 2025-06-02 13:05:07.969303 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 13:05:07.971163 | orchestrator | Monday 02 June 2025 13:05:07 +0000 (0:00:00.652) 0:00:23.147 *********** 2025-06-02 13:05:08.381261 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:08.383219 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:08.683121 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:08.684088 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:08.684394 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02
13:05:08.685198 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:08.685870 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:08.686587 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:09.166797 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:09.169617 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:09.169722 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:09.170111 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:09.171113 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:09.172221 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:09.173281 | orchestrator | 2025-06-02 13:05:09.174109 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-02 13:05:09.174932 | orchestrator | Monday 02 June 2025 13:05:09 +0000 (0:00:01.200) 0:00:24.347 *********** 2025-06-02 13:05:09.332745 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:09.415550 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:09.498251 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:09.576586 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:09.654007 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:09.779432 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:09.779774 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:09.783530 | orchestrator | 2025-06-02 13:05:09.783563 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-02 13:05:09.783578 | orchestrator | Monday 02 June 2025 13:05:09 +0000 (0:00:00.617) 0:00:24.964 *********** 2025-06-02 13:05:13.481164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:05:13.483208 | orchestrator | 2025-06-02 13:05:13.487820 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-02 13:05:13.489905 | orchestrator | Monday 02 June 2025 13:05:13 +0000 (0:00:03.696) 0:00:28.661 *********** 2025-06-02 13:05:18.286880 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.287438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.288975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.291965 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.293212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.293554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.294713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.295472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.297701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:18.298097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.298798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.299525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.300093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.300778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:18.302113 | orchestrator | 2025-06-02 13:05:18.302938 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-02 13:05:18.303928 | orchestrator | Monday 02 June 2025 13:05:18 +0000 (0:00:04.804) 
0:00:33.466 *********** 2025-06-02 13:05:23.054907 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.054961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.057332 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.058186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.060277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.061793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.062300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.063329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.064111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:05:23.065198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.066366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.067396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.068413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.069413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:05:23.070239 | orchestrator | 2025-06-02 13:05:23.070802 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-02 13:05:23.071452 | orchestrator | Monday 02 June 2025 13:05:23 +0000 (0:00:04.770) 0:00:38.236 *********** 2025-06-02 13:05:24.314397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:05:24.314872 | orchestrator | 2025-06-02 13:05:24.315373 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 13:05:24.316819 | orchestrator | Monday 02 June 2025 13:05:24 +0000 (0:00:01.257) 0:00:39.494 *********** 2025-06-02 13:05:24.802355 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:25.077271 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:25.523559 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:25.523740 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:25.524387 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:25.525122 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:25.525912 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:25.526744 | orchestrator | 2025-06-02 13:05:25.527311 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 13:05:25.528031 | orchestrator | Monday 02 June 2025 13:05:25 +0000 (0:00:01.215) 0:00:40.709 *********** 2025-06-02 13:05:25.624596 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:25.624831 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:25.625067 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:25.626432 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:25.721651 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:25.721745 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:25.723773 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:25.724770 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:25.821311 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:25.824021 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:25.825279 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:25.826559 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:25.828316 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:25.940970 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:25.941913 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:25.943272 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:25.944925 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:25.946538 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:26.041022 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:26.042135 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:26.043963 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:26.045334 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:26.046715 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:26.312760 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:26.314278 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:26.315939 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:26.317899 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:26.319237 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:27.544411 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:27.545704 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:27.546855 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:05:27.548785 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:05:27.550501 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:05:27.550893 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:05:27.552476 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:27.553334 | orchestrator | 2025-06-02 13:05:27.554215 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-02 13:05:27.555009 | orchestrator | Monday 02 June 2025 13:05:27 +0000 (0:00:02.016) 0:00:42.726 *********** 2025-06-02 13:05:27.701489 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:27.783064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:27.860659 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:27.942906 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:28.025861 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:28.157060 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:28.158349 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:28.159680 | orchestrator | 2025-06-02 13:05:28.160863 | orchestrator | RUNNING HANDLER [osism.commons.network : 
Netplan configuration changed] ******** 2025-06-02 13:05:28.162078 | orchestrator | Monday 02 June 2025 13:05:28 +0000 (0:00:00.617) 0:00:43.343 *********** 2025-06-02 13:05:28.312727 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:28.554663 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:28.634992 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:28.722177 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:28.802681 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:28.846332 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:28.847005 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:28.848904 | orchestrator | 2025-06-02 13:05:28.849742 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:05:28.850131 | orchestrator | 2025-06-02 13:05:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:05:28.850846 | orchestrator | 2025-06-02 13:05:28 | INFO  | Please wait and do not abort execution. 2025-06-02 13:05:28.852266 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:05:28.852999 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.854061 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.855163 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.856294 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.856883 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.857669 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:05:28.858457 | orchestrator | 2025-06-02 13:05:28.859085 | orchestrator | 2025-06-02 13:05:28.859972 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:05:28.860853 | orchestrator | Monday 02 June 2025 13:05:28 +0000 (0:00:00.688) 0:00:44.031 *********** 2025-06-02 13:05:28.861802 | orchestrator | =============================================================================== 2025-06-02 13:05:28.865950 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.80s 2025-06-02 13:05:28.866468 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.77s 2025-06-02 13:05:28.867036 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.70s 2025-06-02 13:05:28.869750 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.34s 2025-06-02 13:05:28.869777 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2025-06-02 13:05:28.869789 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-06-02 13:05:28.869800 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s 2025-06-02 13:05:28.871804 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.84s 2025-06-02 13:05:28.872477 | orchestrator | osism.commons.network : Remove 
ifupdown package ------------------------- 1.72s 2025-06-02 13:05:28.873182 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s 2025-06-02 13:05:28.873206 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s 2025-06-02 13:05:28.873648 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2025-06-02 13:05:28.874117 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.26s 2025-06-02 13:05:28.874554 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s 2025-06-02 13:05:28.874925 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-06-02 13:05:28.875326 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2025-06-02 13:05:28.875715 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2025-06-02 13:05:28.876164 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-06-02 13:05:28.876529 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s 2025-06-02 13:05:28.877008 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s 2025-06-02 13:05:29.439299 | orchestrator | + osism apply wireguard 2025-06-02 13:05:31.074136 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:05:31.074236 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:05:31.074252 | orchestrator | Registering Redlock._release_script 2025-06-02 13:05:31.136793 | orchestrator | 2025-06-02 13:05:31 | INFO  | Task c7257a42-dd71-464e-81f7-0c13d01b9add (wireguard) was prepared for execution. 2025-06-02 13:05:31.136935 | orchestrator | 2025-06-02 13:05:31 | INFO  | It takes a moment until task c7257a42-dd71-464e-81f7-0c13d01b9add (wireguard) has been started and output is visible here. 
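For context on the osism.commons.network play that just finished: the "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks render one file pair per VXLAN device, and the networkd cleanup task above confirms the file names (/etc/systemd/network/30-vxlan0.netdev and 30-vxlan0.network). The rendered contents are not shown in the log, so the following is only a sketch of what the vxlan0 pair on testbed-manager could look like, inferred from the logged item values (vni 42, local_ip 192.168.16.5, mtu 1350, address 192.168.112.5/20); the section keys and the all-zeros FDB flooding entries derived from 'dests' are assumptions about the template, not job output.

# Sketch only - not the role's actual template.
cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# Assumed mechanism: one flooding entry per address in 'dests', so
# broadcast/unknown traffic is replicated to every other VTEP
# (only the first destination is shown here).
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
EOF

networkctl reload   # what the (here skipped) 'Reload systemd-networkd' handler would trigger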
2025-06-02 13:05:35.206494 | orchestrator | 2025-06-02 13:05:35.207698 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-02 13:05:35.210874 | orchestrator | 2025-06-02 13:05:35.213641 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-02 13:05:35.213771 | orchestrator | Monday 02 June 2025 13:05:35 +0000 (0:00:00.232) 0:00:00.232 *********** 2025-06-02 13:05:36.691335 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:36.691501 | orchestrator | 2025-06-02 13:05:36.692564 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-02 13:05:36.693049 | orchestrator | Monday 02 June 2025 13:05:36 +0000 (0:00:01.484) 0:00:01.716 *********** 2025-06-02 13:05:42.910960 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:42.911989 | orchestrator | 2025-06-02 13:05:42.913189 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-02 13:05:42.914246 | orchestrator | Monday 02 June 2025 13:05:42 +0000 (0:00:06.221) 0:00:07.938 *********** 2025-06-02 13:05:43.454136 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:43.455412 | orchestrator | 2025-06-02 13:05:43.455444 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-02 13:05:43.456956 | orchestrator | Monday 02 June 2025 13:05:43 +0000 (0:00:00.544) 0:00:08.482 *********** 2025-06-02 13:05:43.870871 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:43.871384 | orchestrator | 2025-06-02 13:05:43.872255 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-02 13:05:43.873491 | orchestrator | Monday 02 June 2025 13:05:43 +0000 (0:00:00.416) 0:00:08.899 *********** 2025-06-02 13:05:44.397718 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:44.399862 | orchestrator | 2025-06-02 13:05:44.399911 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-02 13:05:44.401425 | orchestrator | Monday 02 June 2025 13:05:44 +0000 (0:00:00.525) 0:00:09.424 *********** 2025-06-02 13:05:44.937065 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:44.937363 | orchestrator | 2025-06-02 13:05:44.940925 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-02 13:05:44.941975 | orchestrator | Monday 02 June 2025 13:05:44 +0000 (0:00:00.541) 0:00:09.965 *********** 2025-06-02 13:05:45.367484 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:45.368581 | orchestrator | 2025-06-02 13:05:45.369437 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-02 13:05:45.370162 | orchestrator | Monday 02 June 2025 13:05:45 +0000 (0:00:00.430) 0:00:10.396 *********** 2025-06-02 13:05:46.579073 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:46.579863 | orchestrator | 2025-06-02 13:05:46.580954 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-02 13:05:46.582221 | orchestrator | Monday 02 June 2025 13:05:46 +0000 (0:00:01.209) 0:00:11.605 *********** 2025-06-02 13:05:47.497473 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 13:05:47.498333 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:47.499767 | orchestrator | 2025-06-02 13:05:47.501035 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-02 13:05:47.502069 | orchestrator | Monday 02 June 2025 13:05:47 +0000 (0:00:00.919) 0:00:12.525 *********** 2025-06-02 13:05:49.131248 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:49.132822 | orchestrator | 2025-06-02 13:05:49.133856 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-02 13:05:49.135841 | orchestrator | Monday 02 June 2025 13:05:49 +0000 (0:00:01.633) 0:00:14.159 *********** 2025-06-02 13:05:50.105081 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:50.105236 | orchestrator | 2025-06-02 13:05:50.106542 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:05:50.106945 | orchestrator | 2025-06-02 13:05:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:05:50.106970 | orchestrator | 2025-06-02 13:05:50 | INFO  | Please wait and do not abort execution. 2025-06-02 13:05:50.107339 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:05:50.108339 | orchestrator | 2025-06-02 13:05:50.109395 | orchestrator | 2025-06-02 13:05:50.110673 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:05:50.111395 | orchestrator | Monday 02 June 2025 13:05:50 +0000 (0:00:00.974) 0:00:15.133 *********** 2025-06-02 13:05:50.111924 | orchestrator | =============================================================================== 2025-06-02 13:05:50.112668 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.22s 2025-06-02 13:05:50.113304 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.63s 2025-06-02 13:05:50.113671 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.48s 2025-06-02 13:05:50.114164 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s 2025-06-02 13:05:50.114887 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-06-02 13:05:50.115214 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s 2025-06-02 13:05:50.115944 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-06-02 13:05:50.116285 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s 2025-06-02 13:05:50.116959 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-06-02 13:05:50.117569 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-06-02 13:05:50.117906 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-06-02 13:05:50.783406 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-02 13:05:50.824041 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-02 13:05:50.824136 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-02 13:05:50.909313 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 174 0 --:--:-- --:--:-- --:--:-- 174 2025-06-02 13:05:50.925945 | orchestrator | + 
osism apply --environment custom workarounds 2025-06-02 13:05:52.549862 | orchestrator | 2025-06-02 13:05:52 | INFO  | Trying to run play workarounds in environment custom 2025-06-02 13:05:52.554300 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:05:52.554355 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:05:52.554377 | orchestrator | Registering Redlock._release_script 2025-06-02 13:05:52.618268 | orchestrator | 2025-06-02 13:05:52 | INFO  | Task 3abbc2e3-9794-4e3a-b617-336de3350f23 (workarounds) was prepared for execution. 2025-06-02 13:05:52.618357 | orchestrator | 2025-06-02 13:05:52 | INFO  | It takes a moment until task 3abbc2e3-9794-4e3a-b617-336de3350f23 (workarounds) has been started and output is visible here. 2025-06-02 13:05:56.488253 | orchestrator | 2025-06-02 13:05:56.491385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:05:56.491982 | orchestrator | 2025-06-02 13:05:56.493879 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-02 13:05:56.494324 | orchestrator | Monday 02 June 2025 13:05:56 +0000 (0:00:00.132) 0:00:00.132 *********** 2025-06-02 13:05:56.638126 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-02 13:05:56.714436 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-02 13:05:56.788895 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-02 13:05:56.863162 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-02 13:05:57.011538 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-02 13:05:57.147412 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-02 13:05:57.147690 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-02 13:05:57.148749 | orchestrator | 2025-06-02 13:05:57.150006 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-02 13:05:57.150348 | orchestrator | 2025-06-02 13:05:57.150841 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-02 13:05:57.151239 | orchestrator | Monday 02 June 2025 13:05:57 +0000 (0:00:00.661) 0:00:00.793 *********** 2025-06-02 13:05:59.386904 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:59.389764 | orchestrator | 2025-06-02 13:05:59.390836 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-02 13:05:59.391630 | orchestrator | 2025-06-02 13:05:59.392273 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-02 13:05:59.392828 | orchestrator | Monday 02 June 2025 13:05:59 +0000 (0:00:02.235) 0:00:03.028 *********** 2025-06-02 13:06:01.184067 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:06:01.187634 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:06:01.187677 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:06:01.187690 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:06:01.188403 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:06:01.189455 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:06:01.190365 | orchestrator | 2025-06-02 13:06:01.191043 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-02 13:06:01.192137 | orchestrator | 
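Two notes before the CA tasks run. First, on the osism.services.wireguard play above: it generates a server keypair and a preshared key, renders /etc/wireguard/wg0.conf plus per-client configuration files, and enables wg-quick@wg0; prepare-wireguard-configuration.sh then fetches a small value via curl (15 bytes), presumably the address to publish in the client configuration. None of the rendered files appear in the log, so the sketch below only shows one plausible shape; the tunnel addresses, listen port, and peer block are assumptions, not job output.

umask 077
# Key material, generated roughly as the role's key tasks do
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
wg genpsk > /etc/wireguard/psk

# Hypothetical minimal server config
cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = 192.168.48.1/24      # assumed tunnel address
ListenPort = 51820             # assumed port
PrivateKey = $(cat /etc/wireguard/server.key)

[Peer]
PublicKey = <client public key>
PresharedKey = $(cat /etc/wireguard/psk)
AllowedIPs = 192.168.48.2/32
EOF

systemctl enable --now wg-quick@wg0   # the 'Manage wg-quick@wg0.service service' task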
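Second, on the two "Apply netplan configuration" plays just above: they apply netplan on the manager and on all other nodes so the files written by the network role take effect (both plays reported ok). A few hypothetical spot checks for this state, using standard netplan/networkd/iproute2 tooling rather than commands from this job:

netplan get                 # render the merged configuration tree
networkctl status vxlan0    # the VXLAN device should be configured by networkd
ip -d link show vxlan0      # should report 'vxlan id 42 local 192.168.16.5' and mtu 1350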
2025-06-02 13:06:01.192869 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-02 13:06:01.193315 | orchestrator | Monday 02 June 2025 13:06:01 +0000 (0:00:01.796) 0:00:04.825 *********** 2025-06-02 13:06:02.741060 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.741822 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.743160 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.743442 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.744136 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.744741 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-02 13:06:02.745559 | orchestrator | 2025-06-02 13:06:02.746272 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-02 13:06:02.747098 | orchestrator | Monday 02 June 2025 13:06:02 +0000 (0:00:01.557) 0:00:06.382 *********** 2025-06-02 13:06:06.382392 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:06:06.382524 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:06:06.383289 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:06:06.383464 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:06:06.385504 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:06:06.385751 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:06:06.386735 | orchestrator | 2025-06-02 13:06:06.387453 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-02 13:06:06.387915 | orchestrator | Monday 02 June 2025 13:06:06 +0000 (0:00:03.642) 0:00:10.025 *********** 2025-06-02 13:06:06.534410 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:06.613513 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:06.692022 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:06.768528 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:07.068584 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:07.068882 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:07.070719 | orchestrator | 2025-06-02 13:06:07.072376 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-02 13:06:07.072409 | orchestrator | 2025-06-02 13:06:07.072951 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-02 13:06:07.073385 | orchestrator | Monday 02 June 2025 13:06:07 +0000 (0:00:00.688) 0:00:10.714 *********** 2025-06-02 13:06:08.680499 | orchestrator | changed: [testbed-manager] 2025-06-02 13:06:08.680635 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:06:08.684745 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:06:08.685727 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:06:08.686520 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:06:08.687296 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:06:08.687944 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:06:08.689694 | orchestrator | 2025-06-02 13:06:08.690447 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-02 13:06:08.690928 | orchestrator | Monday 02 June 2025 13:06:08 +0000 (0:00:01.610) 0:00:12.324 *********** 2025-06-02 13:06:10.315003 | orchestrator | changed: [testbed-manager] 2025-06-02 13:06:10.315162 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:06:10.316240 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:06:10.316988 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:06:10.317735 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:06:10.318247 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:06:10.318831 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:06:10.320700 | orchestrator | 2025-06-02 13:06:10.321207 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-02 13:06:10.321709 | orchestrator | Monday 02 June 2025 13:06:10 +0000 (0:00:01.631) 0:00:13.956 *********** 2025-06-02 13:06:11.900165 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:06:11.900820 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:06:11.902114 | orchestrator | ok: [testbed-manager] 2025-06-02 13:06:11.903153 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:06:11.904042 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:06:11.904778 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:06:11.906507 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:06:11.907252 | orchestrator | 2025-06-02 13:06:11.909423 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-02 13:06:11.909458 | orchestrator | Monday 02 June 2025 13:06:11 +0000 (0:00:01.588) 0:00:15.544 *********** 2025-06-02 13:06:14.521244 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:06:14.521353 | orchestrator | changed: [testbed-manager] 2025-06-02 13:06:14.521439 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:06:14.522261 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:06:14.524721 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:06:14.525646 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:06:14.526526 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:06:14.527880 | orchestrator | 2025-06-02 13:06:14.528761 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-02 13:06:14.529948 | orchestrator | Monday 02 June 2025 13:06:14 +0000 (0:00:02.616) 0:00:18.160 *********** 2025-06-02 13:06:14.704364 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:06:14.782985 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:14.862668 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:14.941019 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:15.017143 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:15.150434 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:15.151172 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:15.152440 | orchestrator | 2025-06-02 13:06:15.155448 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-02 13:06:15.155484 | orchestrator | 2025-06-02 13:06:15.155496 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-02 13:06:15.155508 | orchestrator | Monday 02 June 2025 13:06:15 +0000 (0:00:00.635) 0:00:18.796 *********** 2025-06-02 13:06:17.714961 | orchestrator | ok: [testbed-manager] 2025-06-02 13:06:17.715447 
| orchestrator | ok: [testbed-node-4] 2025-06-02 13:06:17.716361 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:06:17.717977 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:06:17.719885 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:06:17.720369 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:06:17.721715 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:06:17.722559 | orchestrator | 2025-06-02 13:06:17.723374 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:06:17.724421 | orchestrator | 2025-06-02 13:06:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:06:17.724447 | orchestrator | 2025-06-02 13:06:17 | INFO  | Please wait and do not abort execution. 2025-06-02 13:06:17.725059 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 13:06:17.725319 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.726009 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.726662 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.727407 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.727717 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.728349 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:17.728669 | orchestrator | 2025-06-02 13:06:17.729111 | orchestrator | 2025-06-02 13:06:17.729631 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:06:17.730095 | orchestrator | Monday 02 June 2025 13:06:17 +0000 (0:00:02.560) 0:00:21.358 *********** 2025-06-02 13:06:17.730431 | orchestrator | =============================================================================== 2025-06-02 13:06:17.730956 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s 2025-06-02 13:06:17.731253 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.62s 2025-06-02 13:06:17.731806 | orchestrator | Install python3-docker -------------------------------------------------- 2.56s 2025-06-02 13:06:17.732067 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s 2025-06-02 13:06:17.732902 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-06-02 13:06:17.733505 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-06-02 13:06:17.734323 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2025-06-02 13:06:17.735052 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2025-06-02 13:06:17.736042 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s 2025-06-02 13:06:17.736373 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-06-02 13:06:17.737059 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.66s 
2025-06-02 13:06:17.737352 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2025-06-02 13:06:18.291454 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-02 13:06:19.997466 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:06:19.997568 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:06:19.997612 | orchestrator | Registering Redlock._release_script 2025-06-02 13:06:20.061208 | orchestrator | 2025-06-02 13:06:20 | INFO  | Task 34750cc7-eebe-4f68-891e-9cd2f5eff513 (reboot) was prepared for execution. 2025-06-02 13:06:20.061301 | orchestrator | 2025-06-02 13:06:20 | INFO  | It takes a moment until task 34750cc7-eebe-4f68-891e-9cd2f5eff513 (reboot) has been started and output is visible here. 2025-06-02 13:06:24.077406 | orchestrator | 2025-06-02 13:06:24.079791 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 13:06:24.080707 | orchestrator | 2025-06-02 13:06:24.081752 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:24.083251 | orchestrator | Monday 02 June 2025 13:06:24 +0000 (0:00:00.211) 0:00:00.211 *********** 2025-06-02 13:06:24.175940 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:24.176100 | orchestrator | 2025-06-02 13:06:24.177370 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:24.178122 | orchestrator | Monday 02 June 2025 13:06:24 +0000 (0:00:00.101) 0:00:00.313 *********** 2025-06-02 13:06:25.113823 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:06:25.114489 | orchestrator | 2025-06-02 13:06:25.115921 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:25.115947 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.938) 0:00:01.251 *********** 2025-06-02 13:06:25.224802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:25.227346 | orchestrator | 2025-06-02 13:06:25.227392 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 13:06:25.227406 | orchestrator | 2025-06-02 13:06:25.228132 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:25.229021 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.109) 0:00:01.361 *********** 2025-06-02 13:06:25.336650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:25.336752 | orchestrator | 2025-06-02 13:06:25.337254 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:25.337753 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.113) 0:00:01.474 *********** 2025-06-02 13:06:25.990810 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:06:25.990908 | orchestrator | 2025-06-02 13:06:25.991337 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:25.991973 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.651) 0:00:02.126 *********** 2025-06-02 13:06:26.094526 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:26.094767 | orchestrator | 2025-06-02 13:06:26.095125 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 13:06:26.095744 | orchestrator | 2025-06-02 13:06:26.096754 | orchestrator | TASK 
[Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:26.097610 | orchestrator | Monday 02 June 2025 13:06:26 +0000 (0:00:00.104) 0:00:02.230 *********** 2025-06-02 13:06:26.353284 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:26.353391 | orchestrator | 2025-06-02 13:06:26.354689 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:26.354719 | orchestrator | Monday 02 June 2025 13:06:26 +0000 (0:00:00.258) 0:00:02.488 *********** 2025-06-02 13:06:27.012376 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:06:27.012625 | orchestrator | 2025-06-02 13:06:27.013175 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:27.013831 | orchestrator | Monday 02 June 2025 13:06:27 +0000 (0:00:00.660) 0:00:03.149 *********** 2025-06-02 13:06:27.161635 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:27.161808 | orchestrator | 2025-06-02 13:06:27.162301 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 13:06:27.162549 | orchestrator | 2025-06-02 13:06:27.164477 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:27.164499 | orchestrator | Monday 02 June 2025 13:06:27 +0000 (0:00:00.146) 0:00:03.296 *********** 2025-06-02 13:06:27.268997 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:27.269398 | orchestrator | 2025-06-02 13:06:27.270615 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:27.271374 | orchestrator | Monday 02 June 2025 13:06:27 +0000 (0:00:00.110) 0:00:03.407 *********** 2025-06-02 13:06:27.899319 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:06:27.899500 | orchestrator | 2025-06-02 13:06:27.900169 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:27.901094 | orchestrator | Monday 02 June 2025 13:06:27 +0000 (0:00:00.629) 0:00:04.037 *********** 2025-06-02 13:06:28.057893 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:28.058664 | orchestrator | 2025-06-02 13:06:28.059754 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 13:06:28.060193 | orchestrator | 2025-06-02 13:06:28.061127 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:28.061966 | orchestrator | Monday 02 June 2025 13:06:28 +0000 (0:00:00.144) 0:00:04.182 *********** 2025-06-02 13:06:28.167219 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:28.167320 | orchestrator | 2025-06-02 13:06:28.169953 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:28.170673 | orchestrator | Monday 02 June 2025 13:06:28 +0000 (0:00:00.120) 0:00:04.302 *********** 2025-06-02 13:06:28.827180 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:06:28.828166 | orchestrator | 2025-06-02 13:06:28.829226 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:28.829967 | orchestrator | Monday 02 June 2025 13:06:28 +0000 (0:00:00.663) 0:00:04.965 *********** 2025-06-02 13:06:28.947654 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:28.948710 | orchestrator | 2025-06-02 13:06:28.949705 | orchestrator | PLAY 
[Reboot systems] ********************************************************** 2025-06-02 13:06:28.950846 | orchestrator | 2025-06-02 13:06:28.951502 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 13:06:28.952320 | orchestrator | Monday 02 June 2025 13:06:28 +0000 (0:00:00.117) 0:00:05.083 *********** 2025-06-02 13:06:29.055671 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:29.055756 | orchestrator | 2025-06-02 13:06:29.055769 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 13:06:29.056137 | orchestrator | Monday 02 June 2025 13:06:29 +0000 (0:00:00.106) 0:00:05.189 *********** 2025-06-02 13:06:29.806313 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:06:29.806485 | orchestrator | 2025-06-02 13:06:29.806959 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 13:06:29.807660 | orchestrator | Monday 02 June 2025 13:06:29 +0000 (0:00:00.751) 0:00:05.941 *********** 2025-06-02 13:06:29.846520 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:29.847752 | orchestrator | 2025-06-02 13:06:29.847985 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:06:29.848604 | orchestrator | 2025-06-02 13:06:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:06:29.848810 | orchestrator | 2025-06-02 13:06:29 | INFO  | Please wait and do not abort execution. 2025-06-02 13:06:29.849721 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.850607 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.851628 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.852237 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.853216 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.854175 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:06:29.854572 | orchestrator | 2025-06-02 13:06:29.855359 | orchestrator | 2025-06-02 13:06:29.855886 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:06:29.856447 | orchestrator | Monday 02 June 2025 13:06:29 +0000 (0:00:00.043) 0:00:05.985 *********** 2025-06-02 13:06:29.857118 | orchestrator | =============================================================================== 2025-06-02 13:06:29.857607 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s 2025-06-02 13:06:29.858305 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2025-06-02 13:06:29.858678 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s 2025-06-02 13:06:30.494955 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-02 13:06:32.173851 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:06:32.173929 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:06:32.173948 | orchestrator | Registering 
Redlock._release_script 2025-06-02 13:06:32.233856 | orchestrator | 2025-06-02 13:06:32 | INFO  | Task c8930c30-c683-4696-9da3-467b28543a09 (wait-for-connection) was prepared for execution. 2025-06-02 13:06:32.233927 | orchestrator | 2025-06-02 13:06:32 | INFO  | It takes a moment until task c8930c30-c683-4696-9da3-467b28543a09 (wait-for-connection) has been started and output is visible here. 2025-06-02 13:06:36.556733 | orchestrator | 2025-06-02 13:06:36.556846 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-02 13:06:36.559981 | orchestrator | 2025-06-02 13:06:36.560009 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-02 13:06:36.560612 | orchestrator | Monday 02 June 2025 13:06:36 +0000 (0:00:00.318) 0:00:00.318 *********** 2025-06-02 13:06:48.209808 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:06:48.209922 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:06:48.211206 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:06:48.213108 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:06:48.215533 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:06:48.217356 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:06:48.218382 | orchestrator | 2025-06-02 13:06:48.219408 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:06:48.219828 | orchestrator | 2025-06-02 13:06:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:06:48.220354 | orchestrator | 2025-06-02 13:06:48 | INFO  | Please wait and do not abort execution. 2025-06-02 13:06:48.221432 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.222241 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.223416 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.224361 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.224872 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.225485 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:06:48.225816 | orchestrator | 2025-06-02 13:06:48.226649 | orchestrator | 2025-06-02 13:06:48.227192 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:06:48.227541 | orchestrator | Monday 02 June 2025 13:06:48 +0000 (0:00:11.651) 0:00:11.970 *********** 2025-06-02 13:06:48.228224 | orchestrator | =============================================================================== 2025-06-02 13:06:48.228674 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s 2025-06-02 13:06:48.995391 | orchestrator | + osism apply hddtemp 2025-06-02 13:06:50.833244 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:06:50.833344 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:06:50.833358 | orchestrator | Registering Redlock._release_script 2025-06-02 13:06:50.893883 | orchestrator | 2025-06-02 13:06:50 | INFO  | Task cc04471b-3bb2-47b4-a53e-8032a2f052ef (hddtemp) was prepared for execution. 
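While the hddtemp task spins up, a note on the reboot sequence above: the reboot play deliberately triggers the reboot without waiting ("do not wait for the reboot to complete" changed, the wait variant skipped), and the separate wait-for-connection play then polls each node until SSH answers again, which here took about 11.7 seconds. Splitting trigger and wait avoids holding an SSH connection to a host that is going down. A rough ad-hoc equivalent, assuming the same testbed-nodes inventory group (a sketch, not the playbooks actually used):

# Fire-and-forget reboot on all nodes...
ansible testbed-nodes -b -m ansible.builtin.shell -a 'sleep 2 && reboot' --async 1 --poll 0
# ...then block until every node is reachable over SSH again
ansible testbed-nodes -m ansible.builtin.wait_for_connection -a 'delay=10 timeout=600'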
2025-06-02 13:06:50.893986 | orchestrator | 2025-06-02 13:06:50 | INFO  | It takes a moment until task cc04471b-3bb2-47b4-a53e-8032a2f052ef (hddtemp) has been started and output is visible here.
2025-06-02 13:06:55.022182 | orchestrator |
2025-06-02 13:06:55.023134 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-02 13:06:55.024987 | orchestrator |
2025-06-02 13:06:55.026177 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-02 13:06:55.026596 | orchestrator | Monday 02 June 2025 13:06:55 +0000 (0:00:00.261) 0:00:00.261 ***********
2025-06-02 13:06:55.177343 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:55.255991 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:06:55.334738 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:06:55.413253 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:06:55.599820 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:06:55.734961 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:06:55.736242 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:06:55.736963 | orchestrator |
2025-06-02 13:06:55.737692 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-02 13:06:55.738387 | orchestrator | Monday 02 June 2025 13:06:55 +0000 (0:00:00.713) 0:00:00.975 ***********
2025-06-02 13:06:56.995059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:06:56.995152 | orchestrator |
2025-06-02 13:06:56.996199 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-02 13:06:56.996621 | orchestrator | Monday 02 June 2025 13:06:56 +0000 (0:00:01.258) 0:00:02.233 ***********
2025-06-02 13:06:58.961156 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:58.965642 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:06:58.965757 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:06:58.966863 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:06:58.967993 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:06:58.969359 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:06:58.969730 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:06:58.971340 | orchestrator |
2025-06-02 13:06:58.972350 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-02 13:06:58.973214 | orchestrator | Monday 02 June 2025 13:06:58 +0000 (0:00:01.968) 0:00:04.202 ***********
2025-06-02 13:06:59.624353 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:59.717072 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:00.210601 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:00.212368 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:00.213448 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:00.215765 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:00.217225 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:00.218679 | orchestrator |
2025-06-02 13:07:00.221817 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-02 13:07:00.222796 | orchestrator | Monday 02 June 2025 13:07:00 +0000 (0:00:01.245) 0:00:05.447 ***********
2025-06-02 13:07:01.387114 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:01.387261 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:01.387336 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:01.387918 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:01.388596 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:01.389262 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:01.389766 | orchestrator | ok: [testbed-manager]
2025-06-02 13:07:01.390816 | orchestrator |
2025-06-02 13:07:01.391001 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-02 13:07:01.391286 | orchestrator | Monday 02 June 2025 13:07:01 +0000 (0:00:01.177) 0:00:06.624 ***********
2025-06-02 13:07:01.849468 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:07:01.928697 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:07:02.005677 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:07:02.092021 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:02.289920 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:07:02.290756 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:07:02.291778 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:07:02.292501 | orchestrator |
2025-06-02 13:07:02.293525 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-02 13:07:02.296854 | orchestrator | Monday 02 June 2025 13:07:02 +0000 (0:00:00.904) 0:00:07.529 ***********
2025-06-02 13:07:15.004176 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:15.004314 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:15.004331 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:15.004343 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:15.005519 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:15.006849 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:15.007849 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:15.009028 | orchestrator |
2025-06-02 13:07:15.009991 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-02 13:07:15.010882 | orchestrator | Monday 02 June 2025 13:07:14 +0000 (0:00:12.710) 0:00:20.240 ***********
2025-06-02 13:07:16.520357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:07:16.521288 | orchestrator |
2025-06-02 13:07:16.522410 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-02 13:07:16.523189 | orchestrator | Monday 02 June 2025 13:07:16 +0000 (0:00:01.518) 0:00:21.758 ***********
2025-06-02 13:07:18.548821 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:18.549179 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:18.552184 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:18.552924 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:18.554811 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:18.555406 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:18.556374 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:18.556905 | orchestrator |
2025-06-02 13:07:18.557520 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:07:18.557955 | orchestrator | 2025-06-02 13:07:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:07:18.558943 | orchestrator | 2025-06-02 13:07:18 | INFO  | Please wait and do not abort execution.
2025-06-02 13:07:18.559629 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:18.560406 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.561130 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.561734 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.562580 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.563102 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.564115 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:18.564675 | orchestrator |
2025-06-02 13:07:18.565457 | orchestrator |
2025-06-02 13:07:18.565719 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:07:18.566741 | orchestrator | Monday 02 June 2025 13:07:18 +0000 (0:00:02.031) 0:00:23.790 ***********
2025-06-02 13:07:18.567390 | orchestrator | ===============================================================================
2025-06-02 13:07:18.567841 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.71s
2025-06-02 13:07:18.568626 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.03s
2025-06-02 13:07:18.568896 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s
2025-06-02 13:07:18.569526 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.52s
2025-06-02 13:07:18.570155 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s
2025-06-02 13:07:18.570334 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s
2025-06-02 13:07:18.570698 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s
2025-06-02 13:07:18.571259 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.90s
2025-06-02 13:07:18.571485 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-06-02 13:07:19.203700 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-06-02 13:07:20.682014 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 13:07:20.682178 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 13:07:20.682195 | orchestrator | + local max_attempts=60
2025-06-02 13:07:20.682209 | orchestrator | + local name=ceph-ansible
2025-06-02 13:07:20.682220 | orchestrator | + local attempt_num=1
2025-06-02 13:07:20.682662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 13:07:20.731444 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:20.731542 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 13:07:20.731637 | orchestrator | + local max_attempts=60
2025-06-02 13:07:20.731651 | orchestrator | + local name=kolla-ansible
2025-06-02 13:07:20.731663 | orchestrator | + local attempt_num=1
2025-06-02 13:07:20.732166 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 13:07:20.763688 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:20.763780 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 13:07:20.763794 | orchestrator | + local max_attempts=60
2025-06-02 13:07:20.763808 | orchestrator | + local name=osism-ansible
2025-06-02 13:07:20.763820 | orchestrator | + local attempt_num=1
2025-06-02 13:07:20.763942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 13:07:20.790820 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:20.790910 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 13:07:20.790925 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 13:07:20.944745 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-02 13:07:21.125536 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-02 13:07:21.313272 | orchestrator | ARA in osism-ansible already disabled.
2025-06-02 13:07:21.479783 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-02 13:07:21.480346 | orchestrator | + osism apply gather-facts
2025-06-02 13:07:23.225171 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:23.225278 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:23.225293 | orchestrator | Registering Redlock._release_script
2025-06-02 13:07:23.286389 | orchestrator | 2025-06-02 13:07:23 | INFO  | Task d7bf8ad7-5b16-4d9b-9dac-7489c8ed8e77 (gather-facts) was prepared for execution.
2025-06-02 13:07:23.286536 | orchestrator | 2025-06-02 13:07:23 | INFO  | It takes a moment until task d7bf8ad7-5b16-4d9b-9dac-7489c8ed8e77 (gather-facts) has been started and output is visible here.
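The xtrace output above shows how the deploy script gates on container health: wait_for_container_healthy polls Docker's health status for the ceph-ansible, kolla-ansible, and osism-ansible containers before continuing. A minimal sketch of such a helper, consistent with the traced variable names (the poll interval and failure handling are assumptions; the actual script under /opt/configuration may differ):

    # Poll `docker inspect` until the container reports "healthy".
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "Container $name did not become healthy in time." >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5   # assumed poll interval; not visible in the trace
        done
    }

In this run each container is already healthy, so the first inspect call succeeds and no retries occur.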
2025-06-02 13:07:27.397271 | orchestrator |
2025-06-02 13:07:27.397453 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 13:07:27.398130 | orchestrator |
2025-06-02 13:07:27.401474 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 13:07:27.401501 | orchestrator | Monday 02 June 2025 13:07:27 +0000 (0:00:00.227) 0:00:00.227 ***********
2025-06-02 13:07:32.818096 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:32.821049 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:32.821248 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:32.822079 | orchestrator | ok: [testbed-manager]
2025-06-02 13:07:32.826000 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:32.826168 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:32.827498 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:32.828419 | orchestrator |
2025-06-02 13:07:32.829580 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 13:07:32.830610 | orchestrator |
2025-06-02 13:07:32.830879 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 13:07:32.832519 | orchestrator | Monday 02 June 2025 13:07:32 +0000 (0:00:05.422) 0:00:05.650 ***********
2025-06-02 13:07:33.033000 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:07:33.113720 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:07:33.198485 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:07:33.279720 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:07:33.354190 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:07:33.390600 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:07:33.391910 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:07:33.392276 | orchestrator |
2025-06-02 13:07:33.393698 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:07:33.394445 | orchestrator | 2025-06-02 13:07:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:07:33.394477 | orchestrator | 2025-06-02 13:07:33 | INFO  | Please wait and do not abort execution.
2025-06-02 13:07:33.395470 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.396263 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.397300 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.397988 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.398741 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.399161 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.399977 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:33.400995 | orchestrator |
2025-06-02 13:07:33.402069 | orchestrator |
2025-06-02 13:07:33.402899 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:07:33.403495 | orchestrator | Monday 02 June 2025 13:07:33 +0000 (0:00:00.574) 0:00:06.224 ***********
2025-06-02 13:07:33.404780 | orchestrator | ===============================================================================
2025-06-02 13:07:33.405378 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.42s
2025-06-02 13:07:33.406167 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2025-06-02 13:07:34.059635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-02 13:07:34.081470 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-02 13:07:34.104276 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-02 13:07:34.118131 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-02 13:07:34.131186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-02 13:07:34.145094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-02 13:07:34.155240 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-02 13:07:34.165424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-02 13:07:34.180205 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-02 13:07:34.201009 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-02 13:07:34.217635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-02 13:07:34.236730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-02 13:07:34.256602 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-02 13:07:34.277486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-02 13:07:34.294221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-02 13:07:34.308585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-02 13:07:34.329756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-02 13:07:34.345710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-02 13:07:34.364373 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-02 13:07:34.379418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-02 13:07:34.401216 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-02 13:07:34.750366 | orchestrator | ok: Runtime: 0:18:37.848244
2025-06-02 13:07:34.862190 |
2025-06-02 13:07:34.862334 | TASK [Deploy services]
2025-06-02 13:07:35.396356 | orchestrator | skipping: Conditional result was False
2025-06-02 13:07:35.414775 |
2025-06-02 13:07:35.418555 | TASK [Deploy in a nutshell]
2025-06-02 13:07:36.207985 | orchestrator | + set -e
2025-06-02 13:07:36.208185 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 13:07:36.208210 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 13:07:36.208232 | orchestrator | ++ INTERACTIVE=false
2025-06-02 13:07:36.208246 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 13:07:36.208259 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 13:07:36.208287 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 13:07:36.208339 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 13:07:36.208368 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 13:07:36.208383 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 13:07:36.208404 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 13:07:36.208417 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 13:07:36.208435 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 13:07:36.208446 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 13:07:36.208468 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 13:07:36.208487 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 13:07:36.208509 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 13:07:36.208528 | orchestrator | ++ export ARA=false
2025-06-02 13:07:36.208574 | orchestrator | ++ ARA=false
2025-06-02 13:07:36.208586 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 13:07:36.208602 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 13:07:36.208613 | orchestrator | ++ export TEMPEST=false
2025-06-02 13:07:36.208624 | orchestrator | ++ TEMPEST=false
2025-06-02 13:07:36.208634 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 13:07:36.208646 | orchestrator | ++ IS_ZUUL=true
2025-06-02 13:07:36.208656 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129
2025-06-02 13:07:36.208668 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.129
2025-06-02 13:07:36.208679 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 13:07:36.208689 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 13:07:36.208700 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 13:07:36.208717 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 13:07:36.208728 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 13:07:36.208739 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 13:07:36.208750 | orchestrator |
2025-06-02 13:07:36.208762 | orchestrator | # PULL IMAGES
2025-06-02 13:07:36.208773 | orchestrator |
2025-06-02 13:07:36.208784 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 13:07:36.208801 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 13:07:36.208812 | orchestrator | + echo
2025-06-02 13:07:36.208823 | orchestrator | + echo '# PULL IMAGES'
2025-06-02 13:07:36.208834 | orchestrator | + echo
2025-06-02 13:07:36.210121 | orchestrator | ++ semver latest 7.0.0
2025-06-02 13:07:36.267915 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-02 13:07:36.267990 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 13:07:36.268008 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-02 13:07:37.987387 | orchestrator | 2025-06-02 13:07:37 | INFO  | Trying to run play pull-images in environment custom
2025-06-02 13:07:37.992290 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:37.992330 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:37.992343 | orchestrator | Registering Redlock._release_script
2025-06-02 13:07:38.050850 | orchestrator | 2025-06-02 13:07:38 | INFO  | Task 66a4ba44-2e1f-412c-bd2e-bca4214bb078 (pull-images) was prepared for execution.
2025-06-02 13:07:38.050936 | orchestrator | 2025-06-02 13:07:38 | INFO  | It takes a moment until task 66a4ba44-2e1f-412c-bd2e-bca4214bb078 (pull-images) has been started and output is visible here.
2025-06-02 13:07:42.045210 | orchestrator |
2025-06-02 13:07:42.045318 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-02 13:07:42.046329 | orchestrator |
2025-06-02 13:07:42.047585 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-02 13:07:42.049348 | orchestrator | Monday 02 June 2025 13:07:42 +0000 (0:00:00.166) 0:00:00.166 ***********
2025-06-02 13:08:48.338133 | orchestrator | changed: [testbed-manager]
2025-06-02 13:08:48.338291 | orchestrator |
2025-06-02 13:08:48.338317 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-02 13:08:48.338331 | orchestrator | Monday 02 June 2025 13:08:48 +0000 (0:01:06.293) 0:01:06.459 ***********
2025-06-02 13:09:39.483626 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-02 13:09:39.483756 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-02 13:09:39.483815 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-02 13:09:39.483913 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-02 13:09:39.485235 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-02 13:09:39.488012 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-02 13:09:39.488518 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-02 13:09:39.489086 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-02 13:09:39.489608 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-02 13:09:39.490563 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-02 13:09:39.491294 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-02 13:09:39.491921 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-02 13:09:39.492790 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-02 13:09:39.493824 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-02 13:09:39.493956 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-02 13:09:39.494680 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-02 13:09:39.494843 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-02 13:09:39.495620 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-02 13:09:39.496133 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-02 13:09:39.496240 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-02 13:09:39.496837 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-02 13:09:39.497341 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-02 13:09:39.497714 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-02 13:09:39.498167 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-02 13:09:39.498598 | orchestrator |
2025-06-02 13:09:39.499071 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:09:39.499406 | orchestrator | 2025-06-02 13:09:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:09:39.499531 | orchestrator | 2025-06-02 13:09:39 | INFO  | Please wait and do not abort execution.
2025-06-02 13:09:39.500307 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:09:39.500522 | orchestrator |
2025-06-02 13:09:39.500994 | orchestrator |
2025-06-02 13:09:39.501233 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:09:39.501763 | orchestrator | Monday 02 June 2025 13:09:39 +0000 (0:00:51.145) 0:01:57.605 ***********
2025-06-02 13:09:39.502402 | orchestrator | ===============================================================================
2025-06-02 13:09:39.503682 | orchestrator | Pull keystone image ---------------------------------------------------- 66.29s
2025-06-02 13:09:39.504497 | orchestrator | Pull other images ------------------------------------------------------ 51.15s
2025-06-02 13:09:41.581677 | orchestrator | 2025-06-02 13:09:41 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-02 13:09:41.585741 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:09:41.585792 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:09:41.586217 | orchestrator | Registering Redlock._release_script
2025-06-02 13:09:41.637331 | orchestrator | 2025-06-02 13:09:41 | INFO  | Task ddcd498b-5f02-4919-b963-a4c2caf3695b (wipe-partitions) was prepared for execution.
2025-06-02 13:09:41.637407 | orchestrator | 2025-06-02 13:09:41 | INFO  | It takes a moment until task ddcd498b-5f02-4919-b963-a4c2caf3695b (wipe-partitions) has been started and output is visible here.
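Before Ceph is deployed, the wipe-partitions play that follows cleans the spare disks (/dev/sdb, /dev/sdc, /dev/sdd) on the three storage nodes. A hedged shell equivalent of its per-device steps, matching the task names below (assumes root, and that this device list is correct; the real play iterates over the configured OSD devices):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        test -b "$dev"                             # check device availability
        wipefs --all "$dev"                        # wipe partition/filesystem signatures
        dd if=/dev/zero of="$dev" bs=1M count=32   # overwrite first 32M with zeros
    done
    udevadm control --reload-rules                 # reload udev rules
    udevadm trigger                                # request device events from the kernel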
2025-06-02 13:09:45.256859 | orchestrator |
2025-06-02 13:09:45.256963 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-02 13:09:45.257026 | orchestrator |
2025-06-02 13:09:45.257056 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-02 13:09:45.257146 | orchestrator | Monday 02 June 2025 13:09:45 +0000 (0:00:00.120) 0:00:00.121 ***********
2025-06-02 13:09:45.856914 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:09:45.859670 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:09:45.859869 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:09:45.860171 | orchestrator |
2025-06-02 13:09:45.862652 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-02 13:09:45.862733 | orchestrator | Monday 02 June 2025 13:09:45 +0000 (0:00:00.602) 0:00:00.723 ***********
2025-06-02 13:09:45.997954 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:09:46.077972 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:09:46.078105 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:09:46.078122 | orchestrator |
2025-06-02 13:09:46.078403 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-02 13:09:46.080066 | orchestrator | Monday 02 June 2025 13:09:46 +0000 (0:00:00.220) 0:00:00.943 ***********
2025-06-02 13:09:46.723335 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:09:46.723994 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:09:46.724516 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:09:46.726291 | orchestrator |
2025-06-02 13:09:46.726431 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-02 13:09:46.726823 | orchestrator | Monday 02 June 2025 13:09:46 +0000 (0:00:00.646) 0:00:01.590 ***********
2025-06-02 13:09:46.894097 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:09:46.990610 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:09:46.990701 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:09:46.992161 | orchestrator |
2025-06-02 13:09:46.992203 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-02 13:09:46.992522 | orchestrator | Monday 02 June 2025 13:09:46 +0000 (0:00:00.265) 0:00:01.855 ***********
2025-06-02 13:09:48.135037 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:09:48.135204 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:09:48.135600 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:09:48.135997 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:09:48.136501 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:09:48.139113 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:09:48.139420 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:09:48.140024 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:09:48.140321 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:09:48.140800 | orchestrator |
2025-06-02 13:09:48.141189 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-02 13:09:48.141645 | orchestrator | Monday 02 June 2025 13:09:48 +0000 (0:00:01.144) 0:00:02.999 ***********
2025-06-02 13:09:49.588557 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:09:49.588682 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:09:49.589534 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:09:49.590075 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:09:49.590255 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:09:49.592119 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:09:49.592536 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:09:49.592885 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:09:49.594070 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:09:49.594296 | orchestrator |
2025-06-02 13:09:49.594996 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-02 13:09:49.595159 | orchestrator | Monday 02 June 2025 13:09:49 +0000 (0:00:01.449) 0:00:04.449 ***********
2025-06-02 13:09:53.072680 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:09:53.072808 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:09:53.072833 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:09:53.072852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:09:53.073293 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:09:53.073431 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:09:53.076324 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:09:53.078896 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:09:53.078952 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:09:53.079969 | orchestrator |
2025-06-02 13:09:53.080624 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-02 13:09:53.086114 | orchestrator | Monday 02 June 2025 13:09:53 +0000 (0:00:03.483) 0:00:07.932 ***********
2025-06-02 13:09:53.723400 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:09:53.727922 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:09:53.730004 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:09:53.730284 | orchestrator |
2025-06-02 13:09:53.731558 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-02 13:09:53.733107 | orchestrator | Monday 02 June 2025 13:09:53 +0000 (0:00:00.654) 0:00:08.587 ***********
2025-06-02 13:09:54.407574 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:09:54.408808 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:09:54.409306 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:09:54.410430 | orchestrator |
2025-06-02 13:09:54.411182 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:09:54.412294 | orchestrator | 2025-06-02 13:09:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:09:54.412326 | orchestrator | 2025-06-02 13:09:54 | INFO  | Please wait and do not abort execution.
2025-06-02 13:09:54.414273 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:09:54.414708 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:09:54.415457 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:09:54.416217 | orchestrator |
2025-06-02 13:09:54.417066 | orchestrator |
2025-06-02 13:09:54.417819 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:09:54.417837 | orchestrator | Monday 02 June 2025 13:09:54 +0000 (0:00:00.680) 0:00:09.268 ***********
2025-06-02 13:09:54.417993 | orchestrator | ===============================================================================
2025-06-02 13:09:54.418528 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.48s
2025-06-02 13:09:54.419208 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.45s
2025-06-02 13:09:54.420153 | orchestrator | Check device availability ----------------------------------------------- 1.14s
2025-06-02 13:09:54.420416 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s
2025-06-02 13:09:54.420526 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s
2025-06-02 13:09:54.421285 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s
2025-06-02 13:09:54.421461 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2025-06-02 13:09:54.422242 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2025-06-02 13:09:54.422273 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2025-06-02 13:09:56.727532 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:09:56.727642 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:09:56.727656 | orchestrator | Registering Redlock._release_script
2025-06-02 13:09:56.791769 | orchestrator | 2025-06-02 13:09:56 | INFO  | Task 420ae416-fc39-41cb-a724-a3b41d74d80f (facts) was prepared for execution.
2025-06-02 13:09:56.791908 | orchestrator | 2025-06-02 13:09:56 | INFO  | It takes a moment until task 420ae416-fc39-41cb-a724-a3b41d74d80f (facts) has been started and output is visible here.
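The facts play that follows prepares the hosts' local fact store and then re-gathers facts. As a hedged illustration of the mechanism only (the path and file name here are the stock Ansible convention, not taken from the role): custom facts are files under /etc/ansible/facts.d whose contents surface as ansible_local after the next fact gathering.

    # Conventional Ansible local-facts layout (illustrative only).
    mkdir -p /etc/ansible/facts.d
    printf '{"environment": "testbed"}\n' > /etc/ansible/facts.d/example.fact
    # The value is then visible as ansible_local.example.environment, e.g.:
    #   ansible testbed-manager -m setup -a 'filter=ansible_local'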
2025-06-02 13:10:01.149920 | orchestrator |
2025-06-02 13:10:01.150091 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 13:10:01.150628 | orchestrator |
2025-06-02 13:10:01.152103 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 13:10:01.152199 | orchestrator | Monday 02 June 2025 13:10:01 +0000 (0:00:00.242) 0:00:00.242 ***********
2025-06-02 13:10:02.143583 | orchestrator | ok: [testbed-manager]
2025-06-02 13:10:02.145624 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:10:02.148592 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:10:02.153296 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:10:02.153365 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:02.153379 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:10:02.154120 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:10:02.154929 | orchestrator |
2025-06-02 13:10:02.155905 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 13:10:02.157018 | orchestrator | Monday 02 June 2025 13:10:02 +0000 (0:00:00.996) 0:00:01.239 ***********
2025-06-02 13:10:02.289039 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:10:02.361192 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:10:02.436339 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:10:02.507814 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:10:02.595229 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:03.297872 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:03.298540 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:03.298786 | orchestrator |
2025-06-02 13:10:03.299107 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 13:10:03.303770 | orchestrator |
2025-06-02 13:10:03.303815 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 13:10:03.303846 | orchestrator | Monday 02 June 2025 13:10:03 +0000 (0:00:01.157) 0:00:02.396 ***********
2025-06-02 13:10:08.060798 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:10:08.062001 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:10:08.065057 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:10:08.065148 | orchestrator | ok: [testbed-manager]
2025-06-02 13:10:08.066513 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:10:08.067762 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:08.068531 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:10:08.069855 | orchestrator |
2025-06-02 13:10:08.071220 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 13:10:08.071998 | orchestrator |
2025-06-02 13:10:08.073065 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 13:10:08.074239 | orchestrator | Monday 02 June 2025 13:10:08 +0000 (0:00:04.762) 0:00:07.158 ***********
2025-06-02 13:10:08.322569 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:10:08.394607 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:10:08.487965 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:10:08.570882 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:10:08.647644 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:08.680293 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:08.681700 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:08.683325 | orchestrator |
2025-06-02 13:10:08.684616 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:10:08.685204 | orchestrator | 2025-06-02 13:10:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:10:08.686257 | orchestrator | 2025-06-02 13:10:08 | INFO  | Please wait and do not abort execution.
2025-06-02 13:10:08.687423 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.688502 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.689661 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.690535 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.691203 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.692088 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.692653 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:08.693100 | orchestrator |
2025-06-02 13:10:08.693585 | orchestrator |
2025-06-02 13:10:08.694217 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:10:08.694550 | orchestrator | Monday 02 June 2025 13:10:08 +0000 (0:00:00.618) 0:00:07.777 ***********
2025-06-02 13:10:08.695035 | orchestrator | ===============================================================================
2025-06-02 13:10:08.695505 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s
2025-06-02 13:10:08.696198 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s
2025-06-02 13:10:08.696593 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s
2025-06-02 13:10:08.696932 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2025-06-02 13:10:10.760694 | orchestrator | 2025-06-02 13:10:10 | INFO  | Task 7bfc3584-c01f-4297-8cc6-d475e6e8d258 (ceph-configure-lvm-volumes) was prepared for execution.
2025-06-02 13:10:10.760780 | orchestrator | 2025-06-02 13:10:10 | INFO  | It takes a moment until task 7bfc3584-c01f-4297-8cc6-d475e6e8d258 (ceph-configure-lvm-volumes) has been started and output is visible here.
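The ceph-configure-lvm-volumes play below assigns each OSD data device a stable UUID and derives the LVM volume group and logical volume names from it; the resulting data/data_vg pairs are what later appear as lvm_volumes for ceph-ansible. The naming scheme, reconstructed from the play output (UUID value taken from testbed-node-3 below):

    uuid="999978ba-f5e8-5970-b49f-3220d15259a2"   # per-device osd_lvm_uuid
    data_vg="ceph-${uuid}"                        # volume group name
    data="osd-block-${uuid}"                      # logical volume name
    echo "${data_vg}/${data}"                     # pair referenced in lvm_volumes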
2025-06-02 13:10:15.265731 | orchestrator |
2025-06-02 13:10:15.266711 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 13:10:15.267738 | orchestrator |
2025-06-02 13:10:15.268229 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 13:10:15.268838 | orchestrator | Monday 02 June 2025 13:10:15 +0000 (0:00:00.396) 0:00:00.396 ***********
2025-06-02 13:10:15.486413 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 13:10:15.486555 | orchestrator |
2025-06-02 13:10:15.488316 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 13:10:15.488344 | orchestrator | Monday 02 June 2025 13:10:15 +0000 (0:00:00.223) 0:00:00.620 ***********
2025-06-02 13:10:15.710365 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:15.710595 | orchestrator |
2025-06-02 13:10:15.710755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:15.713008 | orchestrator | Monday 02 June 2025 13:10:15 +0000 (0:00:00.224) 0:00:00.845 ***********
2025-06-02 13:10:16.115095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 13:10:16.115669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 13:10:16.116917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 13:10:16.118371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 13:10:16.118751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 13:10:16.119908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 13:10:16.120526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 13:10:16.121027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 13:10:16.121452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 13:10:16.122349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 13:10:16.122869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 13:10:16.123443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 13:10:16.123656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 13:10:16.124078 | orchestrator |
2025-06-02 13:10:16.124868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:16.124928 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.404) 0:00:01.250 ***********
2025-06-02 13:10:16.535573 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:16.539823 | orchestrator |
2025-06-02 13:10:16.539863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:16.539882 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.421) 0:00:01.671 ***********
2025-06-02 13:10:16.756188 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:16.757443 | orchestrator |
2025-06-02 13:10:16.757604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:16.757972 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.216) 0:00:01.887 ***********
2025-06-02 13:10:16.942221 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:16.946002 | orchestrator |
2025-06-02 13:10:16.946090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:16.946106 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.187) 0:00:02.075 ***********
2025-06-02 13:10:17.128747 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.129281 | orchestrator |
2025-06-02 13:10:17.129933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:17.130371 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.188) 0:00:02.263 ***********
2025-06-02 13:10:17.308563 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.310263 | orchestrator |
2025-06-02 13:10:17.314338 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:17.314777 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.179) 0:00:02.443 ***********
2025-06-02 13:10:17.483876 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.486965 | orchestrator |
2025-06-02 13:10:17.487561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:17.487980 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.175) 0:00:02.619 ***********
2025-06-02 13:10:17.667687 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.667880 | orchestrator |
2025-06-02 13:10:17.669054 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:17.669080 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.183) 0:00:02.802 ***********
2025-06-02 13:10:17.860634 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.862310 | orchestrator |
2025-06-02 13:10:17.862862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:17.863370 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.189) 0:00:02.992 ***********
2025-06-02 13:10:18.247202 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa)
2025-06-02 13:10:18.248432 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa)
2025-06-02 13:10:18.248922 | orchestrator |
2025-06-02 13:10:18.249389 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:18.251185 | orchestrator | Monday 02 June 2025 13:10:18 +0000 (0:00:00.389) 0:00:03.381 ***********
2025-06-02 13:10:18.674085 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf)
2025-06-02 13:10:18.675562 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf)
2025-06-02 13:10:18.677163 | orchestrator |
2025-06-02 13:10:18.679266 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:18.679994 | orchestrator | Monday 02 June 2025 13:10:18 +0000 (0:00:00.423) 0:00:03.805 ***********
2025-06-02 13:10:19.269810 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e)
2025-06-02 13:10:19.270921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e)
2025-06-02 13:10:19.272058 | orchestrator |
2025-06-02 13:10:19.272372 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:19.273449 | orchestrator | Monday 02 June 2025 13:10:19 +0000 (0:00:00.597) 0:00:04.402 ***********
2025-06-02 13:10:19.847516 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8)
2025-06-02 13:10:19.848999 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8)
2025-06-02 13:10:19.849032 | orchestrator |
2025-06-02 13:10:19.849044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:19.849102 | orchestrator | Monday 02 June 2025 13:10:19 +0000 (0:00:00.579) 0:00:04.982 ***********
2025-06-02 13:10:20.582878 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 13:10:20.582974 | orchestrator |
2025-06-02 13:10:20.583081 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:20.583544 | orchestrator | Monday 02 June 2025 13:10:20 +0000 (0:00:00.734) 0:00:05.717 ***********
2025-06-02 13:10:20.983648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-02 13:10:20.983849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-02 13:10:20.984158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-02 13:10:20.987190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-02 13:10:20.987963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-02 13:10:20.988304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-02 13:10:20.990223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-02 13:10:20.990434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-02 13:10:20.990753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-02 13:10:20.993338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-02 13:10:20.994278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-02 13:10:20.994698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-02 13:10:20.995322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-02 13:10:20.995635 | orchestrator |
2025-06-02 13:10:20.997662 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:21.000051 | orchestrator | Monday 02 June 2025 13:10:20 +0000 (0:00:00.399) 0:00:06.116 ***********
2025-06-02 13:10:21.187989 | orchestrator | skipping: [testbed-node-3]
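The long runs of "Add known links" and "Add known partitions" tasks above and below build a map from kernel device names (sda, sdb, ...) to their stable /dev/disk/by-id aliases, which is where the scsi-0QEMU_... and ata-... items come from. The same mapping can be inspected by hand on a node:

    lsblk -dno NAME,SIZE,TYPE    # whole block devices (sda, sdb, ...)
    ls -l /dev/disk/by-id/       # stable scsi-*/ata-* symlinks to those devices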
2025-06-02 13:10:21.188276 | orchestrator |
2025-06-02 13:10:21.189085 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:21.190242 | orchestrator | Monday 02 June 2025 13:10:21 +0000 (0:00:00.205) 0:00:06.322 ***********
2025-06-02 13:10:21.395855 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:21.399713 | orchestrator |
2025-06-02 13:10:21.400642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:21.400914 | orchestrator | Monday 02 June 2025 13:10:21 +0000 (0:00:00.208) 0:00:06.530 ***********
2025-06-02 13:10:21.625712 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:21.626209 | orchestrator |
2025-06-02 13:10:21.627166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:21.627582 | orchestrator | Monday 02 June 2025 13:10:21 +0000 (0:00:00.229) 0:00:06.759 ***********
2025-06-02 13:10:21.835296 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:21.835381 | orchestrator |
2025-06-02 13:10:21.835395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:21.835407 | orchestrator | Monday 02 June 2025 13:10:21 +0000 (0:00:00.207) 0:00:06.967 ***********
2025-06-02 13:10:22.019763 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:22.019844 | orchestrator |
2025-06-02 13:10:22.020218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:22.020508 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:00.185) 0:00:07.152 ***********
2025-06-02 13:10:22.188723 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:22.189146 | orchestrator |
2025-06-02 13:10:22.189476 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:22.190746 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:00.171) 0:00:07.324 ***********
2025-06-02 13:10:22.396383 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:22.396503 | orchestrator |
2025-06-02 13:10:22.396556 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:22.396638 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:00.206) 0:00:07.530 ***********
2025-06-02 13:10:22.571804 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:22.572957 | orchestrator |
2025-06-02 13:10:22.574339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:22.576794 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:00.175) 0:00:07.705 ***********
2025-06-02 13:10:23.439291 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-02 13:10:23.440122 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-02 13:10:23.440957 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-02 13:10:23.445999 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-02 13:10:23.446078 | orchestrator |
2025-06-02 13:10:23.446091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:23.446104 | orchestrator | Monday 02 June 2025 13:10:23 +0000 (0:00:00.869) 0:00:08.574 ***********
2025-06-02 13:10:23.622350 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:23.624106 | orchestrator |
2025-06-02 13:10:23.624221 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:23.624445 | orchestrator | Monday 02 June 2025 13:10:23 +0000 (0:00:00.180) 0:00:08.755 ***********
2025-06-02 13:10:23.785115 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:23.785195 | orchestrator |
2025-06-02 13:10:23.786312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:23.786337 | orchestrator | Monday 02 June 2025 13:10:23 +0000 (0:00:00.162) 0:00:08.917 ***********
2025-06-02 13:10:23.957112 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:23.957202 | orchestrator |
2025-06-02 13:10:23.957218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:23.958886 | orchestrator | Monday 02 June 2025 13:10:23 +0000 (0:00:00.174) 0:00:09.092 ***********
2025-06-02 13:10:24.132828 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:24.134126 | orchestrator |
2025-06-02 13:10:24.134384 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 13:10:24.134963 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.174) 0:00:09.267 ***********
2025-06-02 13:10:24.277870 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-02 13:10:24.278777 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-02 13:10:24.278822 | orchestrator |
2025-06-02 13:10:24.278841 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 13:10:24.278853 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.145) 0:00:09.413 ***********
2025-06-02 13:10:24.398295 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:24.398385 | orchestrator |
2025-06-02 13:10:24.399393 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 13:10:24.403080 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.117) 0:00:09.531 ***********
2025-06-02 13:10:24.525432 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:24.527064 | orchestrator |
2025-06-02 13:10:24.527983 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 13:10:24.529751 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.129) 0:00:09.660 ***********
2025-06-02 13:10:24.652966 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:24.653091 | orchestrator |
2025-06-02 13:10:24.653107 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 13:10:24.653713 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.127) 0:00:09.787 ***********
2025-06-02 13:10:24.795017 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:24.800179 | orchestrator |
2025-06-02 13:10:24.800935 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 13:10:24.801785 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.139) 0:00:09.927 ***********
2025-06-02 13:10:24.966721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '999978ba-f5e8-5970-b49f-3220d15259a2'}})
2025-06-02 13:10:24.969411 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}})
2025-06-02 13:10:24.970954 | orchestrator |
2025-06-02 13:10:24.971986 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 13:10:24.973672 | orchestrator | Monday 02 June 2025 13:10:24 +0000 (0:00:00.169) 0:00:10.096 ***********
2025-06-02 13:10:25.112983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '999978ba-f5e8-5970-b49f-3220d15259a2'}})
2025-06-02 13:10:25.113740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}})
2025-06-02 13:10:25.115398 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:25.115502 | orchestrator |
2025-06-02 13:10:25.116677 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 13:10:25.117342 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.148) 0:00:10.245 ***********
2025-06-02 13:10:25.407740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '999978ba-f5e8-5970-b49f-3220d15259a2'}})
2025-06-02 13:10:25.407928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}})
2025-06-02 13:10:25.408371 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:25.411137 | orchestrator |
2025-06-02 13:10:25.411702 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 13:10:25.411907 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.296) 0:00:10.541 ***********
2025-06-02 13:10:25.556278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '999978ba-f5e8-5970-b49f-3220d15259a2'}})
2025-06-02 13:10:25.558507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}})
2025-06-02 13:10:25.559597 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:25.561535 | orchestrator |
2025-06-02 13:10:25.562352 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 13:10:25.563406 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.148) 0:00:10.689 ***********
2025-06-02 13:10:25.695889 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:25.696910 | orchestrator |
2025-06-02 13:10:25.697296 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 13:10:25.697816 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.130) 0:00:10.829 ***********
2025-06-02 13:10:25.825728 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:25.828567 | orchestrator |
2025-06-02 13:10:25.829533 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 13:10:25.830770 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.130) 0:00:10.959 ***********
2025-06-02 13:10:25.946099 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:25.946265 | orchestrator |
2025-06-02 13:10:25.948290 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 13:10:25.948655 | orchestrator | Monday 02 June 2025 13:10:25 +0000 (0:00:00.119) 0:00:11.079 ***********
2025-06-02 13:10:26.068127 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:26.069590 | orchestrator |
2025-06-02 13:10:26.072478 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 13:10:26.073295 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.120) 0:00:11.199 ***********
2025-06-02 13:10:26.227356 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:26.227617 | orchestrator |
2025-06-02 13:10:26.228618 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 13:10:26.229335 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.136) 0:00:11.361 ***********
2025-06-02 13:10:26.363699 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 13:10:26.366430 | orchestrator |     "ceph_osd_devices": {
2025-06-02 13:10:26.368011 | orchestrator |         "sdb": {
2025-06-02 13:10:26.368750 | orchestrator |             "osd_lvm_uuid": "999978ba-f5e8-5970-b49f-3220d15259a2"
2025-06-02 13:10:26.369292 | orchestrator |         },
2025-06-02 13:10:26.370213 | orchestrator |         "sdc": {
2025-06-02 13:10:26.370635 | orchestrator |             "osd_lvm_uuid": "4eaa56f6-1bb5-52f9-9765-bc2816f621f7"
2025-06-02 13:10:26.371706 | orchestrator |         }
2025-06-02 13:10:26.371799 | orchestrator |     }
2025-06-02 13:10:26.372377 | orchestrator | }
2025-06-02 13:10:26.373377 | orchestrator |
2025-06-02 13:10:26.373401 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 13:10:26.373887 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.136) 0:00:11.497 ***********
2025-06-02 13:10:26.500312 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:26.500408 | orchestrator |
2025-06-02 13:10:26.500425 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 13:10:26.500571 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.135) 0:00:11.633 ***********
2025-06-02 13:10:26.623339 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:26.623959 | orchestrator |
2025-06-02 13:10:26.624153 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 13:10:26.624821 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.124) 0:00:11.758 ***********
2025-06-02 13:10:26.737666 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:26.738850 | orchestrator |
2025-06-02 13:10:26.739776 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 13:10:26.742674 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.114) 0:00:11.872 ***********
2025-06-02 13:10:26.915441 | orchestrator | changed: [testbed-node-3] => {
2025-06-02 13:10:26.916319 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-02 13:10:26.917107 | orchestrator |         "ceph_osd_devices": {
2025-06-02 13:10:26.919901 | orchestrator |             "sdb": {
2025-06-02 13:10:26.919928 | orchestrator |                 "osd_lvm_uuid": "999978ba-f5e8-5970-b49f-3220d15259a2"
2025-06-02 13:10:26.919941 | orchestrator |             },
2025-06-02 13:10:26.920025 | orchestrator |             "sdc": {
2025-06-02 13:10:26.920746 | orchestrator |                 "osd_lvm_uuid": "4eaa56f6-1bb5-52f9-9765-bc2816f621f7"
2025-06-02 13:10:26.921557 | orchestrator |             }
2025-06-02 13:10:26.922681 | orchestrator |         },
2025-06-02 13:10:26.923002 | orchestrator |         "lvm_volumes": [
2025-06-02 13:10:26.923502 | orchestrator |             {
2025-06-02 13:10:26.924682 | orchestrator |                 "data": "osd-block-999978ba-f5e8-5970-b49f-3220d15259a2",
2025-06-02 13:10:26.925913 | orchestrator |                 "data_vg": "ceph-999978ba-f5e8-5970-b49f-3220d15259a2"
2025-06-02 13:10:26.925939 | orchestrator |             },
2025-06-
13:10:26.926319 | orchestrator |  { 2025-06-02 13:10:26.926834 | orchestrator |  "data": "osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7", 2025-06-02 13:10:26.927771 | orchestrator |  "data_vg": "ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7" 2025-06-02 13:10:26.928013 | orchestrator |  } 2025-06-02 13:10:26.928762 | orchestrator |  ] 2025-06-02 13:10:26.932143 | orchestrator |  } 2025-06-02 13:10:26.932338 | orchestrator | } 2025-06-02 13:10:26.933036 | orchestrator | 2025-06-02 13:10:26.933308 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:10:26.934123 | orchestrator | Monday 02 June 2025 13:10:26 +0000 (0:00:00.177) 0:00:12.049 *********** 2025-06-02 13:10:28.770227 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:28.771865 | orchestrator | 2025-06-02 13:10:28.773733 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 13:10:28.774883 | orchestrator | 2025-06-02 13:10:28.777504 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:10:28.778243 | orchestrator | Monday 02 June 2025 13:10:28 +0000 (0:00:01.854) 0:00:13.904 *********** 2025-06-02 13:10:29.015396 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:29.016173 | orchestrator | 2025-06-02 13:10:29.016420 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 13:10:29.016849 | orchestrator | Monday 02 June 2025 13:10:29 +0000 (0:00:00.245) 0:00:14.149 *********** 2025-06-02 13:10:29.234256 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:10:29.234353 | orchestrator | 2025-06-02 13:10:29.234368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:29.234381 | orchestrator | Monday 02 June 2025 13:10:29 +0000 (0:00:00.215) 0:00:14.365 *********** 2025-06-02 13:10:29.590191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:10:29.592203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 13:10:29.592385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:10:29.593927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:10:29.594625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:10:29.595301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:10:29.599647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:10:29.600326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:10:29.600865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 13:10:29.601557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:10:29.602313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:10:29.603917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:10:29.603939 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:10:29.603946 | orchestrator | 2025-06-02 13:10:29.604166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:29.604966 | orchestrator | Monday 02 June 2025 13:10:29 +0000 (0:00:00.356) 0:00:14.722 *********** 2025-06-02 13:10:29.839682 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:29.839787 | orchestrator | 2025-06-02 13:10:29.839802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:29.839815 | orchestrator | Monday 02 June 2025 13:10:29 +0000 (0:00:00.250) 0:00:14.973 *********** 2025-06-02 13:10:30.067944 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:30.069848 | orchestrator | 2025-06-02 13:10:30.073562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:30.074386 | orchestrator | Monday 02 June 2025 13:10:30 +0000 (0:00:00.225) 0:00:15.199 *********** 2025-06-02 13:10:30.279927 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:30.281137 | orchestrator | 2025-06-02 13:10:30.282516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:30.283535 | orchestrator | Monday 02 June 2025 13:10:30 +0000 (0:00:00.214) 0:00:15.413 *********** 2025-06-02 13:10:30.490998 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:30.491091 | orchestrator | 2025-06-02 13:10:30.491112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:30.492272 | orchestrator | Monday 02 June 2025 13:10:30 +0000 (0:00:00.206) 0:00:15.620 *********** 2025-06-02 13:10:31.150909 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:31.151027 | orchestrator | 2025-06-02 13:10:31.152031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:31.154247 | orchestrator | Monday 02 June 2025 13:10:31 +0000 (0:00:00.661) 0:00:16.282 *********** 2025-06-02 13:10:31.343571 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:31.344264 | orchestrator | 2025-06-02 13:10:31.348672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:31.352385 | orchestrator | Monday 02 June 2025 13:10:31 +0000 (0:00:00.196) 0:00:16.478 *********** 2025-06-02 13:10:31.550892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:31.553906 | orchestrator | 2025-06-02 13:10:31.554558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:31.557366 | orchestrator | Monday 02 June 2025 13:10:31 +0000 (0:00:00.207) 0:00:16.685 *********** 2025-06-02 13:10:31.777211 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:31.777867 | orchestrator | 2025-06-02 13:10:31.781012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:31.781041 | orchestrator | Monday 02 June 2025 13:10:31 +0000 (0:00:00.226) 0:00:16.912 *********** 2025-06-02 13:10:32.220622 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959) 2025-06-02 13:10:32.221672 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959) 2025-06-02 13:10:32.222276 | orchestrator | 2025-06-02 
13:10:32.223817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:32.224883 | orchestrator | Monday 02 June 2025 13:10:32 +0000 (0:00:00.439) 0:00:17.351 *********** 2025-06-02 13:10:32.657182 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7) 2025-06-02 13:10:32.657661 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7) 2025-06-02 13:10:32.658991 | orchestrator | 2025-06-02 13:10:32.661398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:32.661721 | orchestrator | Monday 02 June 2025 13:10:32 +0000 (0:00:00.439) 0:00:17.791 *********** 2025-06-02 13:10:33.116575 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b) 2025-06-02 13:10:33.116686 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b) 2025-06-02 13:10:33.116857 | orchestrator | 2025-06-02 13:10:33.116879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:33.117376 | orchestrator | Monday 02 June 2025 13:10:33 +0000 (0:00:00.456) 0:00:18.248 *********** 2025-06-02 13:10:33.694091 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857) 2025-06-02 13:10:33.694226 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857) 2025-06-02 13:10:33.694362 | orchestrator | 2025-06-02 13:10:33.697201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:33.698244 | orchestrator | Monday 02 June 2025 13:10:33 +0000 (0:00:00.580) 0:00:18.828 *********** 2025-06-02 13:10:34.063501 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:10:34.064863 | orchestrator | 2025-06-02 13:10:34.065292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:34.065627 | orchestrator | Monday 02 June 2025 13:10:34 +0000 (0:00:00.365) 0:00:19.193 *********** 2025-06-02 13:10:34.558331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:10:34.558999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 13:10:34.560117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:10:34.562158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:10:34.562835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:10:34.563764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:10:34.563957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:10:34.567311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:10:34.568021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 13:10:34.568589 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:10:34.569409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:10:34.570400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:10:34.570534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:10:34.570877 | orchestrator | 2025-06-02 13:10:34.571533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:34.571976 | orchestrator | Monday 02 June 2025 13:10:34 +0000 (0:00:00.497) 0:00:19.691 *********** 2025-06-02 13:10:34.779600 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:34.781233 | orchestrator | 2025-06-02 13:10:34.783771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:34.783857 | orchestrator | Monday 02 June 2025 13:10:34 +0000 (0:00:00.218) 0:00:19.909 *********** 2025-06-02 13:10:35.564591 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:35.565221 | orchestrator | 2025-06-02 13:10:35.565837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:35.566237 | orchestrator | Monday 02 June 2025 13:10:35 +0000 (0:00:00.784) 0:00:20.694 *********** 2025-06-02 13:10:35.775847 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:35.777348 | orchestrator | 2025-06-02 13:10:35.781171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:35.781331 | orchestrator | Monday 02 June 2025 13:10:35 +0000 (0:00:00.213) 0:00:20.907 *********** 2025-06-02 13:10:35.983878 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:35.983984 | orchestrator | 2025-06-02 13:10:35.985188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:35.986371 | orchestrator | Monday 02 June 2025 13:10:35 +0000 (0:00:00.208) 0:00:21.116 *********** 2025-06-02 13:10:36.199046 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:36.199523 | orchestrator | 2025-06-02 13:10:36.200575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:36.201734 | orchestrator | Monday 02 June 2025 13:10:36 +0000 (0:00:00.215) 0:00:21.332 *********** 2025-06-02 13:10:36.407167 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:36.408863 | orchestrator | 2025-06-02 13:10:36.410237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:36.410991 | orchestrator | Monday 02 June 2025 13:10:36 +0000 (0:00:00.209) 0:00:21.541 *********** 2025-06-02 13:10:36.619133 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:36.620018 | orchestrator | 2025-06-02 13:10:36.621233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:36.622441 | orchestrator | Monday 02 June 2025 13:10:36 +0000 (0:00:00.210) 0:00:21.752 *********** 2025-06-02 13:10:36.823215 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:36.826202 | orchestrator | 2025-06-02 13:10:36.829847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:36.829890 | orchestrator | Monday 02 June 2025 
13:10:36 +0000 (0:00:00.203) 0:00:21.956 *********** 2025-06-02 13:10:37.471388 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 13:10:37.471795 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 13:10:37.472708 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 13:10:37.473614 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 13:10:37.474207 | orchestrator | 2025-06-02 13:10:37.474935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:37.475439 | orchestrator | Monday 02 June 2025 13:10:37 +0000 (0:00:00.647) 0:00:22.603 *********** 2025-06-02 13:10:37.685933 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:37.687437 | orchestrator | 2025-06-02 13:10:37.688826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:37.689749 | orchestrator | Monday 02 June 2025 13:10:37 +0000 (0:00:00.215) 0:00:22.819 *********** 2025-06-02 13:10:37.901508 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:37.902949 | orchestrator | 2025-06-02 13:10:37.905208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:37.906633 | orchestrator | Monday 02 June 2025 13:10:37 +0000 (0:00:00.213) 0:00:23.033 *********** 2025-06-02 13:10:38.102849 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:38.103421 | orchestrator | 2025-06-02 13:10:38.105641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:38.106291 | orchestrator | Monday 02 June 2025 13:10:38 +0000 (0:00:00.201) 0:00:23.235 *********** 2025-06-02 13:10:38.323792 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:38.326117 | orchestrator | 2025-06-02 13:10:38.328691 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 13:10:38.329412 | orchestrator | Monday 02 June 2025 13:10:38 +0000 (0:00:00.221) 0:00:23.457 *********** 2025-06-02 13:10:38.693812 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-02 13:10:38.694203 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-02 13:10:38.695927 | orchestrator | 2025-06-02 13:10:38.696850 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 13:10:38.702329 | orchestrator | Monday 02 June 2025 13:10:38 +0000 (0:00:00.367) 0:00:23.824 *********** 2025-06-02 13:10:38.822856 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:38.825189 | orchestrator | 2025-06-02 13:10:38.825228 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 13:10:38.825242 | orchestrator | Monday 02 June 2025 13:10:38 +0000 (0:00:00.131) 0:00:23.956 *********** 2025-06-02 13:10:38.956837 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:38.957196 | orchestrator | 2025-06-02 13:10:38.958535 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 13:10:38.960272 | orchestrator | Monday 02 June 2025 13:10:38 +0000 (0:00:00.134) 0:00:24.091 *********** 2025-06-02 13:10:39.113513 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:39.114134 | orchestrator | 2025-06-02 13:10:39.115105 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 
13:10:39.117260 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.153) 0:00:24.245 *********** 2025-06-02 13:10:39.261976 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:10:39.262135 | orchestrator | 2025-06-02 13:10:39.262232 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 13:10:39.262379 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.149) 0:00:24.395 *********** 2025-06-02 13:10:39.419953 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}}) 2025-06-02 13:10:39.420053 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf0c471-2dcf-5556-af63-e058f1325c4d'}}) 2025-06-02 13:10:39.420168 | orchestrator | 2025-06-02 13:10:39.420247 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 13:10:39.420744 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.156) 0:00:24.552 *********** 2025-06-02 13:10:39.582849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}})  2025-06-02 13:10:39.584075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf0c471-2dcf-5556-af63-e058f1325c4d'}})  2025-06-02 13:10:39.584177 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:39.584589 | orchestrator | 2025-06-02 13:10:39.584954 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 13:10:39.585300 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.163) 0:00:24.715 *********** 2025-06-02 13:10:39.759110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}})  2025-06-02 13:10:39.759311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf0c471-2dcf-5556-af63-e058f1325c4d'}})  2025-06-02 13:10:39.760063 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:39.760833 | orchestrator | 2025-06-02 13:10:39.761498 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 13:10:39.761774 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.177) 0:00:24.893 *********** 2025-06-02 13:10:39.922975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}})  2025-06-02 13:10:39.923729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf0c471-2dcf-5556-af63-e058f1325c4d'}})  2025-06-02 13:10:39.925257 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:39.926338 | orchestrator | 2025-06-02 13:10:39.927154 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 13:10:39.930720 | orchestrator | Monday 02 June 2025 13:10:39 +0000 (0:00:00.163) 0:00:25.057 *********** 2025-06-02 13:10:40.088252 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:10:40.090218 | orchestrator | 2025-06-02 13:10:40.093533 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 13:10:40.102748 | orchestrator | Monday 02 June 2025 13:10:40 +0000 (0:00:00.164) 0:00:25.221 *********** 2025-06-02 13:10:40.251513 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:10:40.251621 
| orchestrator | 2025-06-02 13:10:40.252703 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 13:10:40.253926 | orchestrator | Monday 02 June 2025 13:10:40 +0000 (0:00:00.161) 0:00:25.383 *********** 2025-06-02 13:10:40.399032 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:40.400700 | orchestrator | 2025-06-02 13:10:40.402577 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 13:10:40.404360 | orchestrator | Monday 02 June 2025 13:10:40 +0000 (0:00:00.149) 0:00:25.532 *********** 2025-06-02 13:10:40.833401 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:40.835084 | orchestrator | 2025-06-02 13:10:40.835674 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 13:10:40.836979 | orchestrator | Monday 02 June 2025 13:10:40 +0000 (0:00:00.433) 0:00:25.966 *********** 2025-06-02 13:10:40.978910 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:40.980662 | orchestrator | 2025-06-02 13:10:40.981632 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 13:10:40.986005 | orchestrator | Monday 02 June 2025 13:10:40 +0000 (0:00:00.145) 0:00:26.111 *********** 2025-06-02 13:10:41.123195 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 13:10:41.128535 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:10:41.128584 | orchestrator |  "sdb": { 2025-06-02 13:10:41.132986 | orchestrator |  "osd_lvm_uuid": "a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10" 2025-06-02 13:10:41.133667 | orchestrator |  }, 2025-06-02 13:10:41.134938 | orchestrator |  "sdc": { 2025-06-02 13:10:41.138247 | orchestrator |  "osd_lvm_uuid": "bbf0c471-2dcf-5556-af63-e058f1325c4d" 2025-06-02 13:10:41.138681 | orchestrator |  } 2025-06-02 13:10:41.141102 | orchestrator |  } 2025-06-02 13:10:41.142155 | orchestrator | } 2025-06-02 13:10:41.142247 | orchestrator | 2025-06-02 13:10:41.146807 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 13:10:41.146990 | orchestrator | Monday 02 June 2025 13:10:41 +0000 (0:00:00.145) 0:00:26.257 *********** 2025-06-02 13:10:41.273264 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:41.276127 | orchestrator | 2025-06-02 13:10:41.276164 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 13:10:41.276178 | orchestrator | Monday 02 June 2025 13:10:41 +0000 (0:00:00.148) 0:00:26.406 *********** 2025-06-02 13:10:41.419070 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:41.420964 | orchestrator | 2025-06-02 13:10:41.422175 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 13:10:41.423958 | orchestrator | Monday 02 June 2025 13:10:41 +0000 (0:00:00.145) 0:00:26.551 *********** 2025-06-02 13:10:41.573991 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:41.577593 | orchestrator | 2025-06-02 13:10:41.583094 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 13:10:41.583163 | orchestrator | Monday 02 June 2025 13:10:41 +0000 (0:00:00.155) 0:00:26.707 *********** 2025-06-02 13:10:41.787887 | orchestrator | changed: [testbed-node-4] => { 2025-06-02 13:10:41.790168 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 13:10:41.791616 | orchestrator |  "ceph_osd_devices": { 2025-06-02 
13:10:41.794541 | orchestrator |  "sdb": { 2025-06-02 13:10:41.795168 | orchestrator |  "osd_lvm_uuid": "a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10" 2025-06-02 13:10:41.796422 | orchestrator |  }, 2025-06-02 13:10:41.797719 | orchestrator |  "sdc": { 2025-06-02 13:10:41.800227 | orchestrator |  "osd_lvm_uuid": "bbf0c471-2dcf-5556-af63-e058f1325c4d" 2025-06-02 13:10:41.800574 | orchestrator |  } 2025-06-02 13:10:41.802306 | orchestrator |  }, 2025-06-02 13:10:41.804090 | orchestrator |  "lvm_volumes": [ 2025-06-02 13:10:41.806282 | orchestrator |  { 2025-06-02 13:10:41.807180 | orchestrator |  "data": "osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10", 2025-06-02 13:10:41.809211 | orchestrator |  "data_vg": "ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10" 2025-06-02 13:10:41.812021 | orchestrator |  }, 2025-06-02 13:10:41.812048 | orchestrator |  { 2025-06-02 13:10:41.813084 | orchestrator |  "data": "osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d", 2025-06-02 13:10:41.814105 | orchestrator |  "data_vg": "ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d" 2025-06-02 13:10:41.815329 | orchestrator |  } 2025-06-02 13:10:41.816091 | orchestrator |  ] 2025-06-02 13:10:41.817506 | orchestrator |  } 2025-06-02 13:10:41.818214 | orchestrator | } 2025-06-02 13:10:41.819067 | orchestrator | 2025-06-02 13:10:41.820069 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:10:41.824049 | orchestrator | Monday 02 June 2025 13:10:41 +0000 (0:00:00.213) 0:00:26.921 *********** 2025-06-02 13:10:42.867390 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:42.868294 | orchestrator | 2025-06-02 13:10:42.869435 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 13:10:42.870495 | orchestrator | 2025-06-02 13:10:42.874809 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:10:42.875493 | orchestrator | Monday 02 June 2025 13:10:42 +0000 (0:00:01.080) 0:00:28.001 *********** 2025-06-02 13:10:43.364973 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:43.366372 | orchestrator | 2025-06-02 13:10:43.367756 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 13:10:43.370774 | orchestrator | Monday 02 June 2025 13:10:43 +0000 (0:00:00.496) 0:00:28.497 *********** 2025-06-02 13:10:44.072910 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:10:44.073715 | orchestrator | 2025-06-02 13:10:44.075199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:44.076214 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.708) 0:00:29.206 *********** 2025-06-02 13:10:44.460215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 13:10:44.461369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 13:10:44.463160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 13:10:44.465224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 13:10:44.465298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 13:10:44.466589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-02 13:10:44.467175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 13:10:44.468349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 13:10:44.469196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 13:10:44.470128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 13:10:44.471264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 13:10:44.472162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 13:10:44.473105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 13:10:44.473718 | orchestrator | 2025-06-02 13:10:44.474418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:44.475420 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.385) 0:00:29.591 *********** 2025-06-02 13:10:44.663554 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:44.663639 | orchestrator | 2025-06-02 13:10:44.663654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:44.664948 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.203) 0:00:29.795 *********** 2025-06-02 13:10:44.854200 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:44.854529 | orchestrator | 2025-06-02 13:10:44.855755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:44.856549 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.192) 0:00:29.987 *********** 2025-06-02 13:10:45.035752 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.035823 | orchestrator | 2025-06-02 13:10:45.037007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:45.037794 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.182) 0:00:30.169 *********** 2025-06-02 13:10:45.230216 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.233115 | orchestrator | 2025-06-02 13:10:45.233248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:45.233541 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.191) 0:00:30.361 *********** 2025-06-02 13:10:45.393577 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.394151 | orchestrator | 2025-06-02 13:10:45.394429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:45.394858 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.167) 0:00:30.529 *********** 2025-06-02 13:10:45.562177 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.562327 | orchestrator | 2025-06-02 13:10:45.562346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:45.562720 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.166) 0:00:30.695 *********** 2025-06-02 13:10:45.770218 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.771532 | orchestrator | 2025-06-02 13:10:45.775677 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-02 13:10:45.776623 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.208) 0:00:30.904 *********** 2025-06-02 13:10:45.975396 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:45.976722 | orchestrator | 2025-06-02 13:10:45.978120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:45.980749 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.204) 0:00:31.109 *********** 2025-06-02 13:10:46.544193 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7) 2025-06-02 13:10:46.544281 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7) 2025-06-02 13:10:46.544296 | orchestrator | 2025-06-02 13:10:46.544597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:46.545074 | orchestrator | Monday 02 June 2025 13:10:46 +0000 (0:00:00.565) 0:00:31.674 *********** 2025-06-02 13:10:47.185927 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb) 2025-06-02 13:10:47.185986 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb) 2025-06-02 13:10:47.186263 | orchestrator | 2025-06-02 13:10:47.186708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:47.187317 | orchestrator | Monday 02 June 2025 13:10:47 +0000 (0:00:00.646) 0:00:32.320 *********** 2025-06-02 13:10:47.590653 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000) 2025-06-02 13:10:47.592926 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000) 2025-06-02 13:10:47.593855 | orchestrator | 2025-06-02 13:10:47.594124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:47.594592 | orchestrator | Monday 02 June 2025 13:10:47 +0000 (0:00:00.403) 0:00:32.723 *********** 2025-06-02 13:10:47.978583 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0) 2025-06-02 13:10:47.979600 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0) 2025-06-02 13:10:47.980377 | orchestrator | 2025-06-02 13:10:47.984032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:47.986981 | orchestrator | Monday 02 June 2025 13:10:47 +0000 (0:00:00.388) 0:00:33.112 *********** 2025-06-02 13:10:48.309799 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:10:48.310296 | orchestrator | 2025-06-02 13:10:48.310872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:48.311895 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.330) 0:00:33.443 *********** 2025-06-02 13:10:48.671594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 13:10:48.678296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 13:10:48.679552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 13:10:48.682081 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 13:10:48.683586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 13:10:48.684820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 13:10:48.686106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 13:10:48.687328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 13:10:48.688631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 13:10:48.689926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 13:10:48.690834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-02 13:10:48.691926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 13:10:48.692855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 13:10:48.696106 | orchestrator | 2025-06-02 13:10:48.697315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:48.697729 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.362) 0:00:33.805 *********** 2025-06-02 13:10:48.872474 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:48.873178 | orchestrator | 2025-06-02 13:10:48.873964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:48.875851 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.201) 0:00:34.006 *********** 2025-06-02 13:10:49.062917 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:49.063295 | orchestrator | 2025-06-02 13:10:49.063602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:49.065102 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.190) 0:00:34.197 *********** 2025-06-02 13:10:49.241665 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:49.243333 | orchestrator | 2025-06-02 13:10:49.244988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:49.245011 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.179) 0:00:34.376 *********** 2025-06-02 13:10:49.431947 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:49.433115 | orchestrator | 2025-06-02 13:10:49.435374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:49.435801 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.190) 0:00:34.566 *********** 2025-06-02 13:10:49.618337 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:49.619315 | orchestrator | 2025-06-02 13:10:49.620535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:49.621236 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.186) 0:00:34.752 *********** 2025-06-02 13:10:50.175761 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:50.175951 | orchestrator | 2025-06-02 13:10:50.177495 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-02 13:10:50.178391 | orchestrator | Monday 02 June 2025 13:10:50 +0000 (0:00:00.556) 0:00:35.309 *********** 2025-06-02 13:10:50.375908 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:50.376363 | orchestrator | 2025-06-02 13:10:50.377148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:50.377995 | orchestrator | Monday 02 June 2025 13:10:50 +0000 (0:00:00.198) 0:00:35.508 *********** 2025-06-02 13:10:50.570248 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:50.570945 | orchestrator | 2025-06-02 13:10:50.571602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:50.572522 | orchestrator | Monday 02 June 2025 13:10:50 +0000 (0:00:00.196) 0:00:35.704 *********** 2025-06-02 13:10:51.170538 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 13:10:51.171699 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 13:10:51.172308 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 13:10:51.173767 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 13:10:51.173868 | orchestrator | 2025-06-02 13:10:51.173943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:51.174383 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.600) 0:00:36.304 *********** 2025-06-02 13:10:51.386544 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:51.388181 | orchestrator | 2025-06-02 13:10:51.388222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:51.388700 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.213) 0:00:36.518 *********** 2025-06-02 13:10:51.590591 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:51.590906 | orchestrator | 2025-06-02 13:10:51.591618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:51.592787 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.206) 0:00:36.724 *********** 2025-06-02 13:10:51.784652 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:51.784856 | orchestrator | 2025-06-02 13:10:51.787054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:10:51.787730 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.193) 0:00:36.918 *********** 2025-06-02 13:10:51.968846 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:51.969615 | orchestrator | 2025-06-02 13:10:51.970702 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 13:10:51.970959 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.184) 0:00:37.103 *********** 2025-06-02 13:10:52.160262 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-02 13:10:52.160682 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-02 13:10:52.161498 | orchestrator | 2025-06-02 13:10:52.162295 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 13:10:52.162617 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.191) 0:00:37.294 *********** 2025-06-02 13:10:52.281252 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:52.282682 | orchestrator | 2025-06-02 13:10:52.283798 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-02 13:10:52.283826 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.121) 0:00:37.415 *********** 2025-06-02 13:10:52.407792 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:52.408135 | orchestrator | 2025-06-02 13:10:52.408601 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 13:10:52.409227 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.125) 0:00:37.541 *********** 2025-06-02 13:10:52.536254 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:52.536643 | orchestrator | 2025-06-02 13:10:52.537331 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 13:10:52.538077 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.129) 0:00:37.670 *********** 2025-06-02 13:10:52.808658 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:10:52.809062 | orchestrator | 2025-06-02 13:10:52.810497 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 13:10:52.811345 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.272) 0:00:37.943 *********** 2025-06-02 13:10:52.967763 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}}) 2025-06-02 13:10:52.968512 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c542c38e-2fd0-548c-8c9f-0ca498087064'}}) 2025-06-02 13:10:52.969590 | orchestrator | 2025-06-02 13:10:52.969933 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 13:10:52.970619 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.158) 0:00:38.101 *********** 2025-06-02 13:10:53.119508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}})  2025-06-02 13:10:53.119969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c542c38e-2fd0-548c-8c9f-0ca498087064'}})  2025-06-02 13:10:53.120505 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:53.121593 | orchestrator | 2025-06-02 13:10:53.121771 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 13:10:53.122386 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.151) 0:00:38.253 *********** 2025-06-02 13:10:53.269263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}})  2025-06-02 13:10:53.269479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c542c38e-2fd0-548c-8c9f-0ca498087064'}})  2025-06-02 13:10:53.269495 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:53.269501 | orchestrator | 2025-06-02 13:10:53.269507 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 13:10:53.269925 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.150) 0:00:38.403 *********** 2025-06-02 13:10:53.411851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}})  2025-06-02 13:10:53.412112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c542c38e-2fd0-548c-8c9f-0ca498087064'}})  2025-06-02 
13:10:53.412940 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:53.413257 | orchestrator | 2025-06-02 13:10:53.413941 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 13:10:53.414539 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.141) 0:00:38.544 *********** 2025-06-02 13:10:53.536286 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:10:53.536741 | orchestrator | 2025-06-02 13:10:53.537466 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 13:10:53.538276 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.125) 0:00:38.670 *********** 2025-06-02 13:10:53.669331 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:10:53.669383 | orchestrator | 2025-06-02 13:10:53.670174 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 13:10:53.670748 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.132) 0:00:38.802 *********** 2025-06-02 13:10:53.793924 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:53.794590 | orchestrator | 2025-06-02 13:10:53.795369 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 13:10:53.795918 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.126) 0:00:38.928 *********** 2025-06-02 13:10:53.921251 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:53.922265 | orchestrator | 2025-06-02 13:10:53.923657 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 13:10:53.924613 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.126) 0:00:39.055 *********** 2025-06-02 13:10:54.045708 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:54.047097 | orchestrator | 2025-06-02 13:10:54.047399 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 13:10:54.048081 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.122) 0:00:39.177 *********** 2025-06-02 13:10:54.174790 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:10:54.175729 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:10:54.177363 | orchestrator |  "sdb": { 2025-06-02 13:10:54.178921 | orchestrator |  "osd_lvm_uuid": "1475bed6-7ba6-5e8e-8ce2-217cc0c6359d" 2025-06-02 13:10:54.179512 | orchestrator |  }, 2025-06-02 13:10:54.180058 | orchestrator |  "sdc": { 2025-06-02 13:10:54.180546 | orchestrator |  "osd_lvm_uuid": "c542c38e-2fd0-548c-8c9f-0ca498087064" 2025-06-02 13:10:54.181162 | orchestrator |  } 2025-06-02 13:10:54.181647 | orchestrator |  } 2025-06-02 13:10:54.182231 | orchestrator | } 2025-06-02 13:10:54.182655 | orchestrator | 2025-06-02 13:10:54.183071 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 13:10:54.183519 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.130) 0:00:39.308 *********** 2025-06-02 13:10:54.303639 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:54.304112 | orchestrator | 2025-06-02 13:10:54.304995 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 13:10:54.305628 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.128) 0:00:39.436 *********** 2025-06-02 13:10:54.556216 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:54.557027 | orchestrator | 2025-06-02 13:10:54.557951 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 13:10:54.558517 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.252) 0:00:39.688 *********** 2025-06-02 13:10:54.676326 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:10:54.677184 | orchestrator | 2025-06-02 13:10:54.677955 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 13:10:54.678736 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.120) 0:00:39.809 *********** 2025-06-02 13:10:54.878404 | orchestrator | changed: [testbed-node-5] => { 2025-06-02 13:10:54.879557 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 13:10:54.880618 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:10:54.881320 | orchestrator |  "sdb": { 2025-06-02 13:10:54.881715 | orchestrator |  "osd_lvm_uuid": "1475bed6-7ba6-5e8e-8ce2-217cc0c6359d" 2025-06-02 13:10:54.882669 | orchestrator |  }, 2025-06-02 13:10:54.883265 | orchestrator |  "sdc": { 2025-06-02 13:10:54.883992 | orchestrator |  "osd_lvm_uuid": "c542c38e-2fd0-548c-8c9f-0ca498087064" 2025-06-02 13:10:54.885092 | orchestrator |  } 2025-06-02 13:10:54.887425 | orchestrator |  }, 2025-06-02 13:10:54.888030 | orchestrator |  "lvm_volumes": [ 2025-06-02 13:10:54.888649 | orchestrator |  { 2025-06-02 13:10:54.889191 | orchestrator |  "data": "osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d", 2025-06-02 13:10:54.889430 | orchestrator |  "data_vg": "ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d" 2025-06-02 13:10:54.889861 | orchestrator |  }, 2025-06-02 13:10:54.890525 | orchestrator |  { 2025-06-02 13:10:54.891063 | orchestrator |  "data": "osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064", 2025-06-02 13:10:54.891375 | orchestrator |  "data_vg": "ceph-c542c38e-2fd0-548c-8c9f-0ca498087064" 2025-06-02 13:10:54.891843 | orchestrator |  } 2025-06-02 13:10:54.892215 | orchestrator |  ] 2025-06-02 13:10:54.892624 | orchestrator |  } 2025-06-02 13:10:54.893011 | orchestrator | } 2025-06-02 13:10:54.893357 | orchestrator | 2025-06-02 13:10:54.893777 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:10:54.894224 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.202) 0:00:40.012 *********** 2025-06-02 13:10:55.776888 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:55.778259 | orchestrator | 2025-06-02 13:10:55.780704 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:10:55.780802 | orchestrator | 2025-06-02 13:10:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:10:55.780829 | orchestrator | 2025-06-02 13:10:55 | INFO  | Please wait and do not abort execution. 
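Note: before the recap figures below — each "Write configuration file" handler run delegates to testbed-manager (192.168.16.5) and persists the per-host device data computed above. Reconstructed from the "Print configuration data" output for testbed-node-5, the persisted payload would look like the following YAML (the destination path on the manager is not visible in this log and is therefore omitted):

# Sketch of the per-host payload, reconstructed from the log output above
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 1475bed6-7ba6-5e8e-8ce2-217cc0c6359d
  sdc:
    osd_lvm_uuid: c542c38e-2fd0-548c-8c9f-0ca498087064
lvm_volumes:
  - data: osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d
    data_vg: ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d
  - data: osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064
    data_vg: ceph-c542c38e-2fd0-548c-8c9f-0ca498087064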
2025-06-02 13:10:55.781311 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:10:55.782493 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:10:55.783341 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:10:55.784712 | orchestrator | 2025-06-02 13:10:55.785985 | orchestrator | 2025-06-02 13:10:55.788091 | orchestrator | 2025-06-02 13:10:55.788202 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:10:55.789109 | orchestrator | Monday 02 June 2025 13:10:55 +0000 (0:00:00.897) 0:00:40.909 *********** 2025-06-02 13:10:55.790782 | orchestrator | =============================================================================== 2025-06-02 13:10:55.791496 | orchestrator | Write configuration file ------------------------------------------------ 3.83s 2025-06-02 13:10:55.791857 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s 2025-06-02 13:10:55.792484 | orchestrator | Get initial list of available block devices ----------------------------- 1.15s 2025-06-02 13:10:55.793219 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2025-06-02 13:10:55.793642 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s 2025-06-02 13:10:55.794089 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2025-06-02 13:10:55.794854 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-06-02 13:10:55.795344 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-06-02 13:10:55.795857 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.70s 2025-06-02 13:10:55.796307 | orchestrator | Set WAL devices config data --------------------------------------------- 0.68s 2025-06-02 13:10:55.796755 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-06-02 13:10:55.797161 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-06-02 13:10:55.797587 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-06-02 13:10:55.798083 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s 2025-06-02 13:10:55.798584 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-06-02 13:10:55.798942 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-06-02 13:10:55.799335 | orchestrator | Print configuration data ------------------------------------------------ 0.59s 2025-06-02 13:10:55.800554 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-06-02 13:10:55.800957 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-06-02 13:10:55.801256 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-06-02 13:11:07.810130 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:11:07.810192 | orchestrator | Registering Redlock._extend_script 2025-06-02 
13:11:07.810205 | orchestrator | Registering Redlock._release_script 2025-06-02 13:11:07.859855 | orchestrator | 2025-06-02 13:11:07 | INFO  | Task 90022827-09c8-48ad-984f-1b30edbf2fc4 (sync inventory) is running in background. Output coming soon. 2025-06-02 14:11:10.510638 | orchestrator | 2025-06-02 14:11:10 | INFO  | Task 2731e9c7-9a93-4010-81ed-ce335d2880d8 (ceph-create-lvm-devices) was prepared for execution. 2025-06-02 14:11:10.510787 | orchestrator | 2025-06-02 14:11:10 | INFO  | It takes a moment until task 2731e9c7-9a93-4010-81ed-ce335d2880d8 (ceph-create-lvm-devices) has been started and output is visible here. 2025-06-02 14:11:14.586749 | orchestrator | 2025-06-02 14:11:14.588069 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 14:11:14.592158 | orchestrator | 2025-06-02 14:11:14.592264 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 14:11:14.592938 | orchestrator | Monday 02 June 2025 14:11:14 +0000 (0:00:00.304) 0:00:00.304 *********** 2025-06-02 14:11:14.855885 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 14:11:14.856453 | orchestrator | 2025-06-02 14:11:14.857667 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 14:11:14.858961 | orchestrator | Monday 02 June 2025 14:11:14 +0000 (0:00:00.274) 0:00:00.578 *********** 2025-06-02 14:11:15.082547 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:15.083400 | orchestrator | 2025-06-02 14:11:15.084113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:15.084977 | orchestrator | Monday 02 June 2025 14:11:15 +0000 (0:00:00.226) 0:00:00.805 *********** 2025-06-02 14:11:15.472811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 14:11:15.472938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 14:11:15.472951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 14:11:15.473558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 14:11:15.474211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 14:11:15.474412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 14:11:15.475078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 14:11:15.476314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 14:11:15.476440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 14:11:15.476455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 14:11:15.477028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 14:11:15.477304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 14:11:15.477662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 14:11:15.478526 | orchestrator | 2025-06-02 14:11:15.478694 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-02 14:11:15.478993 | orchestrator | Monday 02 June 2025 14:11:15 +0000 (0:00:00.391) 0:00:01.196 *********** 2025-06-02 14:11:15.925413 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:15.925959 | orchestrator | 2025-06-02 14:11:15.925992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:15.926006 | orchestrator | Monday 02 June 2025 14:11:15 +0000 (0:00:00.450) 0:00:01.647 *********** 2025-06-02 14:11:16.132352 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:16.132698 | orchestrator | 2025-06-02 14:11:16.133447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:16.134230 | orchestrator | Monday 02 June 2025 14:11:16 +0000 (0:00:00.208) 0:00:01.855 *********** 2025-06-02 14:11:16.383690 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:16.383823 | orchestrator | 2025-06-02 14:11:16.383899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:16.383974 | orchestrator | Monday 02 June 2025 14:11:16 +0000 (0:00:00.249) 0:00:02.104 *********** 2025-06-02 14:11:16.607488 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:16.607639 | orchestrator | 2025-06-02 14:11:16.607915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:16.608137 | orchestrator | Monday 02 June 2025 14:11:16 +0000 (0:00:00.227) 0:00:02.332 *********** 2025-06-02 14:11:16.798269 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:16.800170 | orchestrator | 2025-06-02 14:11:16.800200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:16.800213 | orchestrator | Monday 02 June 2025 14:11:16 +0000 (0:00:00.188) 0:00:02.520 *********** 2025-06-02 14:11:16.995122 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:16.995595 | orchestrator | 2025-06-02 14:11:16.995972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:16.997193 | orchestrator | Monday 02 June 2025 14:11:16 +0000 (0:00:00.197) 0:00:02.718 *********** 2025-06-02 14:11:17.210438 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:17.210541 | orchestrator | 2025-06-02 14:11:17.211305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:17.211978 | orchestrator | Monday 02 June 2025 14:11:17 +0000 (0:00:00.215) 0:00:02.933 *********** 2025-06-02 14:11:17.406002 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:17.407168 | orchestrator | 2025-06-02 14:11:17.407757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:17.408290 | orchestrator | Monday 02 June 2025 14:11:17 +0000 (0:00:00.196) 0:00:03.129 *********** 2025-06-02 14:11:17.838312 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa) 2025-06-02 14:11:17.838714 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa) 2025-06-02 14:11:17.839880 | orchestrator | 2025-06-02 14:11:17.841450 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:17.842175 | orchestrator | Monday 02 June 2025 14:11:17 
+0000 (0:00:00.430) 0:00:03.559 *********** 2025-06-02 14:11:18.241023 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf) 2025-06-02 14:11:18.241591 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf) 2025-06-02 14:11:18.243196 | orchestrator | 2025-06-02 14:11:18.244334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:18.245508 | orchestrator | Monday 02 June 2025 14:11:18 +0000 (0:00:00.404) 0:00:03.964 *********** 2025-06-02 14:11:18.976596 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e) 2025-06-02 14:11:18.977140 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e) 2025-06-02 14:11:18.978148 | orchestrator | 2025-06-02 14:11:18.979096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:18.980100 | orchestrator | Monday 02 June 2025 14:11:18 +0000 (0:00:00.732) 0:00:04.696 *********** 2025-06-02 14:11:19.662456 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8) 2025-06-02 14:11:19.663183 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8) 2025-06-02 14:11:19.664097 | orchestrator | 2025-06-02 14:11:19.664985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:19.665735 | orchestrator | Monday 02 June 2025 14:11:19 +0000 (0:00:00.689) 0:00:05.385 *********** 2025-06-02 14:11:20.474725 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 14:11:20.476276 | orchestrator | 2025-06-02 14:11:20.478113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:20.478994 | orchestrator | Monday 02 June 2025 14:11:20 +0000 (0:00:00.809) 0:00:06.196 *********** 2025-06-02 14:11:20.895881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 14:11:20.899062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 14:11:20.900881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 14:11:20.901879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 14:11:20.902528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 14:11:20.903226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 14:11:20.903937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 14:11:20.904308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-02 14:11:20.905336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 14:11:20.905418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 14:11:20.905515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 
2025-06-02 14:11:20.906079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 14:11:20.906994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 14:11:20.908323 | orchestrator | 2025-06-02 14:11:20.909070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:20.909798 | orchestrator | Monday 02 June 2025 14:11:20 +0000 (0:00:00.417) 0:00:06.614 *********** 2025-06-02 14:11:21.102931 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:21.103038 | orchestrator | 2025-06-02 14:11:21.103413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:21.104084 | orchestrator | Monday 02 June 2025 14:11:21 +0000 (0:00:00.212) 0:00:06.826 *********** 2025-06-02 14:11:21.306305 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:21.306522 | orchestrator | 2025-06-02 14:11:21.307478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:21.308240 | orchestrator | Monday 02 June 2025 14:11:21 +0000 (0:00:00.203) 0:00:07.029 *********** 2025-06-02 14:11:21.518008 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:21.518348 | orchestrator | 2025-06-02 14:11:21.519246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:21.519898 | orchestrator | Monday 02 June 2025 14:11:21 +0000 (0:00:00.212) 0:00:07.241 *********** 2025-06-02 14:11:21.710894 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:21.711444 | orchestrator | 2025-06-02 14:11:21.712057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:21.712959 | orchestrator | Monday 02 June 2025 14:11:21 +0000 (0:00:00.192) 0:00:07.434 *********** 2025-06-02 14:11:21.915561 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:21.915788 | orchestrator | 2025-06-02 14:11:21.916627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:21.917896 | orchestrator | Monday 02 June 2025 14:11:21 +0000 (0:00:00.203) 0:00:07.637 *********** 2025-06-02 14:11:22.123899 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:22.124004 | orchestrator | 2025-06-02 14:11:22.124659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:22.125576 | orchestrator | Monday 02 June 2025 14:11:22 +0000 (0:00:00.209) 0:00:07.847 *********** 2025-06-02 14:11:22.317759 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:22.318062 | orchestrator | 2025-06-02 14:11:22.319285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:22.319626 | orchestrator | Monday 02 June 2025 14:11:22 +0000 (0:00:00.193) 0:00:08.041 *********** 2025-06-02 14:11:22.517569 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:22.518108 | orchestrator | 2025-06-02 14:11:22.518937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:22.519470 | orchestrator | Monday 02 June 2025 14:11:22 +0000 (0:00:00.199) 0:00:08.241 *********** 2025-06-02 14:11:23.623012 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 14:11:23.623369 | orchestrator | ok: [testbed-node-3] => 
(item=sda14) 2025-06-02 14:11:23.624559 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 14:11:23.625401 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 14:11:23.626140 | orchestrator | 2025-06-02 14:11:23.626814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:23.630151 | orchestrator | Monday 02 June 2025 14:11:23 +0000 (0:00:01.105) 0:00:09.346 *********** 2025-06-02 14:11:23.823552 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:23.824363 | orchestrator | 2025-06-02 14:11:23.824751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:23.826109 | orchestrator | Monday 02 June 2025 14:11:23 +0000 (0:00:00.199) 0:00:09.546 *********** 2025-06-02 14:11:24.019728 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:24.020395 | orchestrator | 2025-06-02 14:11:24.021324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:24.021941 | orchestrator | Monday 02 June 2025 14:11:24 +0000 (0:00:00.196) 0:00:09.743 *********** 2025-06-02 14:11:24.213783 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:24.213938 | orchestrator | 2025-06-02 14:11:24.213955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:24.216096 | orchestrator | Monday 02 June 2025 14:11:24 +0000 (0:00:00.194) 0:00:09.937 *********** 2025-06-02 14:11:24.402111 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:24.402298 | orchestrator | 2025-06-02 14:11:24.403854 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 14:11:24.404962 | orchestrator | Monday 02 June 2025 14:11:24 +0000 (0:00:00.187) 0:00:10.125 *********** 2025-06-02 14:11:24.547716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:24.548744 | orchestrator | 2025-06-02 14:11:24.550226 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 14:11:24.551085 | orchestrator | Monday 02 June 2025 14:11:24 +0000 (0:00:00.146) 0:00:10.271 *********** 2025-06-02 14:11:24.751255 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '999978ba-f5e8-5970-b49f-3220d15259a2'}}) 2025-06-02 14:11:24.751356 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}}) 2025-06-02 14:11:24.751451 | orchestrator | 2025-06-02 14:11:24.751759 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 14:11:24.752182 | orchestrator | Monday 02 June 2025 14:11:24 +0000 (0:00:00.203) 0:00:10.475 *********** 2025-06-02 14:11:26.985809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'}) 2025-06-02 14:11:26.985974 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}) 2025-06-02 14:11:26.987168 | orchestrator | 2025-06-02 14:11:26.987805 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 14:11:26.988526 | orchestrator | Monday 02 June 2025 14:11:26 +0000 (0:00:02.226) 0:00:12.701 *********** 2025-06-02 14:11:27.145461 | 
orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:27.145558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:27.146511 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:27.147717 | orchestrator | 2025-06-02 14:11:27.148162 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 14:11:27.149349 | orchestrator | Monday 02 June 2025 14:11:27 +0000 (0:00:00.167) 0:00:12.868 *********** 2025-06-02 14:11:28.630277 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'}) 2025-06-02 14:11:28.630929 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}) 2025-06-02 14:11:28.631712 | orchestrator | 2025-06-02 14:11:28.633030 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 14:11:28.636594 | orchestrator | Monday 02 June 2025 14:11:28 +0000 (0:00:01.483) 0:00:14.352 *********** 2025-06-02 14:11:28.789106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:28.789201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:28.789215 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:28.790151 | orchestrator | 2025-06-02 14:11:28.791292 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 14:11:28.792458 | orchestrator | Monday 02 June 2025 14:11:28 +0000 (0:00:00.159) 0:00:14.511 *********** 2025-06-02 14:11:28.924349 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:28.924478 | orchestrator | 2025-06-02 14:11:28.924554 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 14:11:28.924571 | orchestrator | Monday 02 June 2025 14:11:28 +0000 (0:00:00.135) 0:00:14.647 *********** 2025-06-02 14:11:29.339603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:29.340518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:29.340994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:29.344673 | orchestrator | 2025-06-02 14:11:29.344766 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 14:11:29.344783 | orchestrator | Monday 02 June 2025 14:11:29 +0000 (0:00:00.413) 0:00:15.060 *********** 2025-06-02 14:11:29.481941 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:29.482785 | orchestrator | 2025-06-02 14:11:29.484123 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 14:11:29.484883 | orchestrator | Monday 02 
June 2025 14:11:29 +0000 (0:00:00.143) 0:00:15.204 *********** 2025-06-02 14:11:29.635909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:29.637569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:29.641746 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:29.641779 | orchestrator | 2025-06-02 14:11:29.641793 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 14:11:29.644008 | orchestrator | Monday 02 June 2025 14:11:29 +0000 (0:00:00.154) 0:00:15.359 *********** 2025-06-02 14:11:29.774938 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:29.776347 | orchestrator | 2025-06-02 14:11:29.777503 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 14:11:29.781392 | orchestrator | Monday 02 June 2025 14:11:29 +0000 (0:00:00.138) 0:00:15.497 *********** 2025-06-02 14:11:29.942363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:29.943787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:29.947138 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:29.947179 | orchestrator | 2025-06-02 14:11:29.947193 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 14:11:29.947206 | orchestrator | Monday 02 June 2025 14:11:29 +0000 (0:00:00.166) 0:00:15.663 *********** 2025-06-02 14:11:30.085398 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:30.086547 | orchestrator | 2025-06-02 14:11:30.090141 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 14:11:30.090173 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.143) 0:00:15.807 *********** 2025-06-02 14:11:30.245517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:30.247768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:30.250691 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.250722 | orchestrator | 2025-06-02 14:11:30.250734 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 14:11:30.250747 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.158) 0:00:15.965 *********** 2025-06-02 14:11:30.405673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:30.406379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:30.406952 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.407653 | orchestrator | 2025-06-02 
14:11:30.408381 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 14:11:30.408771 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.164) 0:00:16.130 *********** 2025-06-02 14:11:30.567985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:30.569384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:30.570791 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.574182 | orchestrator | 2025-06-02 14:11:30.574235 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 14:11:30.574249 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.161) 0:00:16.291 *********** 2025-06-02 14:11:30.702415 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.702489 | orchestrator | 2025-06-02 14:11:30.702667 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 14:11:30.703109 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.134) 0:00:16.426 *********** 2025-06-02 14:11:30.848975 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.849860 | orchestrator | 2025-06-02 14:11:30.853278 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 14:11:30.853315 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.144) 0:00:16.570 *********** 2025-06-02 14:11:30.993170 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:30.993278 | orchestrator | 2025-06-02 14:11:30.994207 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 14:11:30.994970 | orchestrator | Monday 02 June 2025 14:11:30 +0000 (0:00:00.146) 0:00:16.716 *********** 2025-06-02 14:11:31.326554 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 14:11:31.328004 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 14:11:31.328809 | orchestrator | } 2025-06-02 14:11:31.329697 | orchestrator | 2025-06-02 14:11:31.332564 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 14:11:31.333547 | orchestrator | Monday 02 June 2025 14:11:31 +0000 (0:00:00.332) 0:00:17.048 *********** 2025-06-02 14:11:31.472053 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 14:11:31.473717 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 14:11:31.474141 | orchestrator | } 2025-06-02 14:11:31.475022 | orchestrator | 2025-06-02 14:11:31.479720 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 14:11:31.479999 | orchestrator | Monday 02 June 2025 14:11:31 +0000 (0:00:00.147) 0:00:17.196 *********** 2025-06-02 14:11:31.608290 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 14:11:31.609875 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 14:11:31.611008 | orchestrator | } 2025-06-02 14:11:31.611670 | orchestrator | 2025-06-02 14:11:31.612482 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 14:11:31.616006 | orchestrator | Monday 02 June 2025 14:11:31 +0000 (0:00:00.135) 0:00:17.332 *********** 2025-06-02 14:11:32.274277 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 14:11:32.274876 | orchestrator | 2025-06-02 14:11:32.276957 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 14:11:32.277961 | orchestrator | Monday 02 June 2025 14:11:32 +0000 (0:00:00.663) 0:00:17.995 *********** 2025-06-02 14:11:32.783356 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:32.783930 | orchestrator | 2025-06-02 14:11:32.785273 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 14:11:32.786198 | orchestrator | Monday 02 June 2025 14:11:32 +0000 (0:00:00.509) 0:00:18.505 *********** 2025-06-02 14:11:33.297243 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:33.300359 | orchestrator | 2025-06-02 14:11:33.302592 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 14:11:33.303198 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.513) 0:00:19.019 *********** 2025-06-02 14:11:33.446643 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:33.447271 | orchestrator | 2025-06-02 14:11:33.448595 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 14:11:33.449236 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.150) 0:00:19.169 *********** 2025-06-02 14:11:33.553266 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:33.553948 | orchestrator | 2025-06-02 14:11:33.555272 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 14:11:33.555862 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.107) 0:00:19.277 *********** 2025-06-02 14:11:33.676056 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:33.676107 | orchestrator | 2025-06-02 14:11:33.676531 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 14:11:33.677376 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.121) 0:00:19.398 *********** 2025-06-02 14:11:33.837433 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 14:11:33.837957 | orchestrator |  "vgs_report": { 2025-06-02 14:11:33.840022 | orchestrator |  "vg": [] 2025-06-02 14:11:33.841542 | orchestrator |  } 2025-06-02 14:11:33.841867 | orchestrator | } 2025-06-02 14:11:33.843049 | orchestrator | 2025-06-02 14:11:33.844145 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 14:11:33.844931 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.161) 0:00:19.560 *********** 2025-06-02 14:11:33.976324 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:33.976429 | orchestrator | 2025-06-02 14:11:33.976446 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 14:11:33.976580 | orchestrator | Monday 02 June 2025 14:11:33 +0000 (0:00:00.137) 0:00:19.697 *********** 2025-06-02 14:11:34.103733 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:34.104930 | orchestrator | 2025-06-02 14:11:34.106641 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 14:11:34.107044 | orchestrator | Monday 02 June 2025 14:11:34 +0000 (0:00:00.128) 0:00:19.826 *********** 2025-06-02 14:11:34.542107 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:34.542330 | orchestrator | 2025-06-02 14:11:34.543570 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2025-06-02 14:11:34.544425 | orchestrator | Monday 02 June 2025 14:11:34 +0000 (0:00:00.440) 0:00:20.266 *********** 2025-06-02 14:11:34.687966 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:34.688646 | orchestrator | 2025-06-02 14:11:34.689573 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 14:11:34.691055 | orchestrator | Monday 02 June 2025 14:11:34 +0000 (0:00:00.145) 0:00:20.411 *********** 2025-06-02 14:11:34.837923 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:34.838885 | orchestrator | 2025-06-02 14:11:34.839613 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 14:11:34.846254 | orchestrator | Monday 02 June 2025 14:11:34 +0000 (0:00:00.149) 0:00:20.561 *********** 2025-06-02 14:11:34.979617 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:34.980454 | orchestrator | 2025-06-02 14:11:34.981455 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 14:11:34.988043 | orchestrator | Monday 02 June 2025 14:11:34 +0000 (0:00:00.142) 0:00:20.703 *********** 2025-06-02 14:11:35.126629 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.126811 | orchestrator | 2025-06-02 14:11:35.127271 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 14:11:35.128788 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.146) 0:00:20.850 *********** 2025-06-02 14:11:35.274926 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.275728 | orchestrator | 2025-06-02 14:11:35.276135 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 14:11:35.277325 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.148) 0:00:20.998 *********** 2025-06-02 14:11:35.410958 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.411059 | orchestrator | 2025-06-02 14:11:35.411606 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 14:11:35.412689 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.135) 0:00:21.134 *********** 2025-06-02 14:11:35.545334 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.545738 | orchestrator | 2025-06-02 14:11:35.546768 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 14:11:35.549506 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.133) 0:00:21.267 *********** 2025-06-02 14:11:35.685071 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.685928 | orchestrator | 2025-06-02 14:11:35.687267 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 14:11:35.687351 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.141) 0:00:21.409 *********** 2025-06-02 14:11:35.848760 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.849193 | orchestrator | 2025-06-02 14:11:35.850867 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 14:11:35.851676 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.162) 0:00:21.572 *********** 2025-06-02 14:11:35.975327 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:35.975818 | orchestrator | 2025-06-02 14:11:35.977081 | orchestrator | TASK [Fail if DB LV size < 30 
GiB for ceph_db_wal_devices] ********************* 2025-06-02 14:11:35.977779 | orchestrator | Monday 02 June 2025 14:11:35 +0000 (0:00:00.126) 0:00:21.699 *********** 2025-06-02 14:11:36.124891 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:36.125584 | orchestrator | 2025-06-02 14:11:36.126770 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 14:11:36.127440 | orchestrator | Monday 02 June 2025 14:11:36 +0000 (0:00:00.149) 0:00:21.848 *********** 2025-06-02 14:11:36.286776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:36.286935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:36.287126 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:36.287267 | orchestrator | 2025-06-02 14:11:36.287636 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 14:11:36.287964 | orchestrator | Monday 02 June 2025 14:11:36 +0000 (0:00:00.160) 0:00:22.009 *********** 2025-06-02 14:11:36.681314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:36.681563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:36.682518 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:36.683110 | orchestrator | 2025-06-02 14:11:36.683602 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 14:11:36.683999 | orchestrator | Monday 02 June 2025 14:11:36 +0000 (0:00:00.395) 0:00:22.405 *********** 2025-06-02 14:11:36.827543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:36.829024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:36.830510 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:36.831648 | orchestrator | 2025-06-02 14:11:36.832966 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 14:11:36.835394 | orchestrator | Monday 02 June 2025 14:11:36 +0000 (0:00:00.145) 0:00:22.551 *********** 2025-06-02 14:11:36.978173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:36.980151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:36.981245 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:36.981715 | orchestrator | 2025-06-02 14:11:36.982741 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 14:11:36.985243 | orchestrator | Monday 02 June 2025 14:11:36 +0000 (0:00:00.149) 0:00:22.701 *********** 2025-06-02 
14:11:37.137548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:37.138953 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:37.140230 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:37.141135 | orchestrator | 2025-06-02 14:11:37.144302 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 14:11:37.144344 | orchestrator | Monday 02 June 2025 14:11:37 +0000 (0:00:00.159) 0:00:22.860 *********** 2025-06-02 14:11:37.292131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:37.293005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:37.294000 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:37.295633 | orchestrator | 2025-06-02 14:11:37.299715 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 14:11:37.299776 | orchestrator | Monday 02 June 2025 14:11:37 +0000 (0:00:00.155) 0:00:23.015 *********** 2025-06-02 14:11:37.446383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:37.446795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:37.447891 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:37.448690 | orchestrator | 2025-06-02 14:11:37.449637 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 14:11:37.450388 | orchestrator | Monday 02 June 2025 14:11:37 +0000 (0:00:00.153) 0:00:23.169 *********** 2025-06-02 14:11:37.595742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:37.596604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:37.597923 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:37.598646 | orchestrator | 2025-06-02 14:11:37.599609 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 14:11:37.603315 | orchestrator | Monday 02 June 2025 14:11:37 +0000 (0:00:00.150) 0:00:23.319 *********** 2025-06-02 14:11:38.090298 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:38.091242 | orchestrator | 2025-06-02 14:11:38.093112 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 14:11:38.093141 | orchestrator | Monday 02 June 2025 14:11:38 +0000 (0:00:00.494) 0:00:23.814 *********** 2025-06-02 14:11:38.572099 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:38.574166 | orchestrator | 2025-06-02 14:11:38.574204 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] 
*********************** 2025-06-02 14:11:38.574236 | orchestrator | Monday 02 June 2025 14:11:38 +0000 (0:00:00.481) 0:00:24.295 *********** 2025-06-02 14:11:38.706405 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:11:38.708748 | orchestrator | 2025-06-02 14:11:38.712104 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 14:11:38.712141 | orchestrator | Monday 02 June 2025 14:11:38 +0000 (0:00:00.134) 0:00:24.430 *********** 2025-06-02 14:11:38.858362 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'vg_name': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}) 2025-06-02 14:11:38.859028 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'vg_name': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'}) 2025-06-02 14:11:38.859938 | orchestrator | 2025-06-02 14:11:38.861956 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 14:11:38.861995 | orchestrator | Monday 02 June 2025 14:11:38 +0000 (0:00:00.151) 0:00:24.582 *********** 2025-06-02 14:11:39.004733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:39.004809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:39.004822 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:39.005241 | orchestrator | 2025-06-02 14:11:39.005504 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 14:11:39.005907 | orchestrator | Monday 02 June 2025 14:11:38 +0000 (0:00:00.145) 0:00:24.727 *********** 2025-06-02 14:11:39.332269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:39.336489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:39.337067 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:39.337753 | orchestrator | 2025-06-02 14:11:39.338481 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 14:11:39.339170 | orchestrator | Monday 02 June 2025 14:11:39 +0000 (0:00:00.327) 0:00:25.055 *********** 2025-06-02 14:11:39.477315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'})  2025-06-02 14:11:39.478286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'})  2025-06-02 14:11:39.482108 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:11:39.482289 | orchestrator | 2025-06-02 14:11:39.483973 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 14:11:39.484683 | orchestrator | Monday 02 June 2025 14:11:39 +0000 (0:00:00.146) 0:00:25.202 *********** 2025-06-02 14:11:39.735437 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 14:11:39.736756 | orchestrator |  "lvm_report": { 
2025-06-02 14:11:39.740549 | orchestrator |  "lv": [ 2025-06-02 14:11:39.740579 | orchestrator |  { 2025-06-02 14:11:39.740662 | orchestrator |  "lv_name": "osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7", 2025-06-02 14:11:39.740675 | orchestrator |  "vg_name": "ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7" 2025-06-02 14:11:39.740734 | orchestrator |  }, 2025-06-02 14:11:39.743408 | orchestrator |  { 2025-06-02 14:11:39.744574 | orchestrator |  "lv_name": "osd-block-999978ba-f5e8-5970-b49f-3220d15259a2", 2025-06-02 14:11:39.746150 | orchestrator |  "vg_name": "ceph-999978ba-f5e8-5970-b49f-3220d15259a2" 2025-06-02 14:11:39.747601 | orchestrator |  } 2025-06-02 14:11:39.748385 | orchestrator |  ], 2025-06-02 14:11:39.750310 | orchestrator |  "pv": [ 2025-06-02 14:11:39.750545 | orchestrator |  { 2025-06-02 14:11:39.751750 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 14:11:39.753035 | orchestrator |  "vg_name": "ceph-999978ba-f5e8-5970-b49f-3220d15259a2" 2025-06-02 14:11:39.753662 | orchestrator |  }, 2025-06-02 14:11:39.754472 | orchestrator |  { 2025-06-02 14:11:39.755211 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 14:11:39.755588 | orchestrator |  "vg_name": "ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7" 2025-06-02 14:11:39.756569 | orchestrator |  } 2025-06-02 14:11:39.756908 | orchestrator |  ] 2025-06-02 14:11:39.757583 | orchestrator |  } 2025-06-02 14:11:39.758006 | orchestrator | } 2025-06-02 14:11:39.758624 | orchestrator | 2025-06-02 14:11:39.759026 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 14:11:39.759723 | orchestrator | 2025-06-02 14:11:39.760227 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 14:11:39.760700 | orchestrator | Monday 02 June 2025 14:11:39 +0000 (0:00:00.257) 0:00:25.459 *********** 2025-06-02 14:11:39.970216 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 14:11:39.971233 | orchestrator | 2025-06-02 14:11:39.971280 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 14:11:39.971292 | orchestrator | Monday 02 June 2025 14:11:39 +0000 (0:00:00.230) 0:00:25.690 *********** 2025-06-02 14:11:40.179037 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:40.179775 | orchestrator | 2025-06-02 14:11:40.179804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:40.179819 | orchestrator | Monday 02 June 2025 14:11:40 +0000 (0:00:00.211) 0:00:25.902 *********** 2025-06-02 14:11:40.546209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 14:11:40.546726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 14:11:40.548302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 14:11:40.550154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 14:11:40.551178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 14:11:40.552604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 14:11:40.552957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 14:11:40.553994 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 14:11:40.554904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 14:11:40.555151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 14:11:40.555693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 14:11:40.556678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 14:11:40.556901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 14:11:40.557651 | orchestrator | 2025-06-02 14:11:40.558092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:40.558394 | orchestrator | Monday 02 June 2025 14:11:40 +0000 (0:00:00.368) 0:00:26.270 *********** 2025-06-02 14:11:40.736333 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:40.737081 | orchestrator | 2025-06-02 14:11:40.737746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:40.738346 | orchestrator | Monday 02 June 2025 14:11:40 +0000 (0:00:00.190) 0:00:26.461 *********** 2025-06-02 14:11:40.913651 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:40.913816 | orchestrator | 2025-06-02 14:11:40.914268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:40.914640 | orchestrator | Monday 02 June 2025 14:11:40 +0000 (0:00:00.177) 0:00:26.639 *********** 2025-06-02 14:11:41.101218 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:41.102095 | orchestrator | 2025-06-02 14:11:41.103313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:41.103650 | orchestrator | Monday 02 June 2025 14:11:41 +0000 (0:00:00.186) 0:00:26.825 *********** 2025-06-02 14:11:41.575509 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:41.575660 | orchestrator | 2025-06-02 14:11:41.576158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:41.576955 | orchestrator | Monday 02 June 2025 14:11:41 +0000 (0:00:00.473) 0:00:27.299 *********** 2025-06-02 14:11:41.764232 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:41.764577 | orchestrator | 2025-06-02 14:11:41.766201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:41.766749 | orchestrator | Monday 02 June 2025 14:11:41 +0000 (0:00:00.188) 0:00:27.487 *********** 2025-06-02 14:11:41.949163 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:41.949739 | orchestrator | 2025-06-02 14:11:41.950615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:41.951814 | orchestrator | Monday 02 June 2025 14:11:41 +0000 (0:00:00.185) 0:00:27.673 *********** 2025-06-02 14:11:42.132708 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:42.133311 | orchestrator | 2025-06-02 14:11:42.133892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:42.134690 | orchestrator | Monday 02 June 2025 14:11:42 +0000 (0:00:00.183) 0:00:27.857 *********** 2025-06-02 14:11:42.339992 | orchestrator | skipping: [testbed-node-4] 
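Each "Add known links to the list of available block devices" pass resolves one family of /dev/disk/by-id aliases (scsi-0QEMU_QEMU_HARDDISK_<uuid>, scsi-SQEMU_QEMU_HARDDISK_<uuid>, ata-QEMU_DVD-ROM_QM00001) back to a kernel device name, so a disk can be addressed stably even if sdb/sdc enumeration changes between boots. The included /ansible/tasks/_add-device-links.yml is not reproduced in this log; a minimal sketch of the idea, assuming it works by resolving the by-id symlink (the task names and the _device_links variable are illustrative, not from the log):

    # Sketch only: resolve a by-id alias to its kernel device name.
    # "item" is the device alias supplied by the include loop seen above.
    - name: Stat the by-id link
      ansible.builtin.stat:
        path: "/dev/disk/by-id/{{ item }}"
      register: _link

    - name: Record which kernel device the alias points at
      ansible.builtin.set_fact:
        _device_links: "{{ _device_links | default({})
                           | combine({item: _link.stat.lnk_source | basename}) }}"
      when: _link.stat.islnk | default(false)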
2025-06-02 14:11:42.340214 | orchestrator | 2025-06-02 14:11:42.341014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:42.342619 | orchestrator | Monday 02 June 2025 14:11:42 +0000 (0:00:00.206) 0:00:28.063 *********** 2025-06-02 14:11:42.709460 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959) 2025-06-02 14:11:42.709690 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959) 2025-06-02 14:11:42.710291 | orchestrator | 2025-06-02 14:11:42.710880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:42.711345 | orchestrator | Monday 02 June 2025 14:11:42 +0000 (0:00:00.370) 0:00:28.434 *********** 2025-06-02 14:11:43.103152 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7) 2025-06-02 14:11:43.103230 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7) 2025-06-02 14:11:43.103374 | orchestrator | 2025-06-02 14:11:43.103438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:43.104342 | orchestrator | Monday 02 June 2025 14:11:43 +0000 (0:00:00.390) 0:00:28.825 *********** 2025-06-02 14:11:43.484150 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b) 2025-06-02 14:11:43.484613 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b) 2025-06-02 14:11:43.485128 | orchestrator | 2025-06-02 14:11:43.485504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:43.486502 | orchestrator | Monday 02 June 2025 14:11:43 +0000 (0:00:00.382) 0:00:29.208 *********** 2025-06-02 14:11:43.893225 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857) 2025-06-02 14:11:43.894209 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857) 2025-06-02 14:11:43.894699 | orchestrator | 2025-06-02 14:11:43.895575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:11:43.896431 | orchestrator | Monday 02 June 2025 14:11:43 +0000 (0:00:00.408) 0:00:29.617 *********** 2025-06-02 14:11:44.206910 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 14:11:44.207108 | orchestrator | 2025-06-02 14:11:44.209377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:44.209873 | orchestrator | Monday 02 June 2025 14:11:44 +0000 (0:00:00.312) 0:00:29.929 *********** 2025-06-02 14:11:44.777711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 14:11:44.778301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 14:11:44.778806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 14:11:44.780382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 14:11:44.780816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop4) 2025-06-02 14:11:44.781908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 14:11:44.782625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 14:11:44.783122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 14:11:44.783618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 14:11:44.784100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 14:11:44.785369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 14:11:44.786087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 14:11:44.787155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 14:11:44.787924 | orchestrator | 2025-06-02 14:11:44.788901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:44.789402 | orchestrator | Monday 02 June 2025 14:11:44 +0000 (0:00:00.571) 0:00:30.501 *********** 2025-06-02 14:11:44.953712 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:44.953797 | orchestrator | 2025-06-02 14:11:44.953811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:44.953939 | orchestrator | Monday 02 June 2025 14:11:44 +0000 (0:00:00.174) 0:00:30.675 *********** 2025-06-02 14:11:45.155724 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:45.155961 | orchestrator | 2025-06-02 14:11:45.157192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:45.158106 | orchestrator | Monday 02 June 2025 14:11:45 +0000 (0:00:00.203) 0:00:30.879 *********** 2025-06-02 14:11:45.368066 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:45.368264 | orchestrator | 2025-06-02 14:11:45.369264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:45.370674 | orchestrator | Monday 02 June 2025 14:11:45 +0000 (0:00:00.210) 0:00:31.090 *********** 2025-06-02 14:11:45.558143 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:45.559509 | orchestrator | 2025-06-02 14:11:45.560897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:45.562139 | orchestrator | Monday 02 June 2025 14:11:45 +0000 (0:00:00.190) 0:00:31.281 *********** 2025-06-02 14:11:45.747170 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:45.748391 | orchestrator | 2025-06-02 14:11:45.749600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:45.750480 | orchestrator | Monday 02 June 2025 14:11:45 +0000 (0:00:00.189) 0:00:31.470 *********** 2025-06-02 14:11:45.948243 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:45.948423 | orchestrator | 2025-06-02 14:11:45.949010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:45.949665 | orchestrator | Monday 02 June 2025 14:11:45 +0000 (0:00:00.201) 0:00:31.671 *********** 2025-06-02 14:11:46.152206 | orchestrator | skipping: [testbed-node-4] 
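The "Add known partitions" tasks above and below come from including /ansible/tasks/_add-device-partitions.yml once per block device: devices without partitions (the loop devices, the unpartitioned sdb-sdd, and sr0) are skipped, while sda contributes sda1, sda14, sda15 and sda16. A minimal sketch of such an include loop, assuming the fact name available_block_devices and the loop variable device (neither is visible in the log):

    # Outer loop: include the helper task file once per device from facts.
    - name: Add known partitions to the list of available block devices
      ansible.builtin.include_tasks: _add-device-partitions.yml
      loop: "{{ ansible_devices.keys() | list }}"
      loop_control:
        loop_var: device

    # _add-device-partitions.yml (sketch): append each partition name;
    # skipped for devices that have no partitions, matching the log.
    - name: Add known partitions to the list of available block devices
      ansible.builtin.set_fact:
        available_block_devices: "{{ available_block_devices + [item] }}"
      loop: "{{ ansible_devices[device].partitions.keys() | list }}"
      when: ansible_devices[device].partitions | length > 0
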
2025-06-02 14:11:46.153130 | orchestrator | 2025-06-02 14:11:46.153161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:46.153971 | orchestrator | Monday 02 June 2025 14:11:46 +0000 (0:00:00.199) 0:00:31.871 *********** 2025-06-02 14:11:46.381568 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:46.381670 | orchestrator | 2025-06-02 14:11:46.383675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:46.383704 | orchestrator | Monday 02 June 2025 14:11:46 +0000 (0:00:00.229) 0:00:32.101 *********** 2025-06-02 14:11:47.213814 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 14:11:47.214286 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 14:11:47.214689 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 14:11:47.215697 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 14:11:47.216146 | orchestrator | 2025-06-02 14:11:47.216871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:47.217377 | orchestrator | Monday 02 June 2025 14:11:47 +0000 (0:00:00.836) 0:00:32.938 *********** 2025-06-02 14:11:47.399968 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:47.400123 | orchestrator | 2025-06-02 14:11:47.401226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:47.402541 | orchestrator | Monday 02 June 2025 14:11:47 +0000 (0:00:00.183) 0:00:33.121 *********** 2025-06-02 14:11:47.599113 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:47.600211 | orchestrator | 2025-06-02 14:11:47.600362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:47.601260 | orchestrator | Monday 02 June 2025 14:11:47 +0000 (0:00:00.200) 0:00:33.322 *********** 2025-06-02 14:11:48.262310 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:48.262789 | orchestrator | 2025-06-02 14:11:48.263903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:11:48.264770 | orchestrator | Monday 02 June 2025 14:11:48 +0000 (0:00:00.663) 0:00:33.985 *********** 2025-06-02 14:11:48.467514 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:48.467786 | orchestrator | 2025-06-02 14:11:48.468966 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 14:11:48.469733 | orchestrator | Monday 02 June 2025 14:11:48 +0000 (0:00:00.203) 0:00:34.189 *********** 2025-06-02 14:11:48.606548 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:48.606706 | orchestrator | 2025-06-02 14:11:48.607593 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 14:11:48.608768 | orchestrator | Monday 02 June 2025 14:11:48 +0000 (0:00:00.140) 0:00:34.329 *********** 2025-06-02 14:11:48.811694 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}}) 2025-06-02 14:11:48.812363 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bbf0c471-2dcf-5556-af63-e058f1325c4d'}}) 2025-06-02 14:11:48.813176 | orchestrator | 2025-06-02 14:11:48.814554 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 14:11:48.815155 | 
orchestrator | Monday 02 June 2025 14:11:48 +0000 (0:00:00.204) 0:00:34.534 *********** 2025-06-02 14:11:51.005931 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}) 2025-06-02 14:11:51.006418 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'}) 2025-06-02 14:11:51.008894 | orchestrator | 2025-06-02 14:11:51.010110 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 14:11:51.010678 | orchestrator | Monday 02 June 2025 14:11:50 +0000 (0:00:02.193) 0:00:36.728 *********** 2025-06-02 14:11:51.158291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:51.160059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:51.161154 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:51.162931 | orchestrator | 2025-06-02 14:11:51.163517 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 14:11:51.164667 | orchestrator | Monday 02 June 2025 14:11:51 +0000 (0:00:00.153) 0:00:36.881 *********** 2025-06-02 14:11:52.389311 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}) 2025-06-02 14:11:52.389509 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'}) 2025-06-02 14:11:52.390972 | orchestrator | 2025-06-02 14:11:52.392036 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 14:11:52.392950 | orchestrator | Monday 02 June 2025 14:11:52 +0000 (0:00:01.231) 0:00:38.113 *********** 2025-06-02 14:11:52.543295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:52.543904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:52.544713 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:52.547476 | orchestrator | 2025-06-02 14:11:52.547507 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 14:11:52.547522 | orchestrator | Monday 02 June 2025 14:11:52 +0000 (0:00:00.153) 0:00:38.266 *********** 2025-06-02 14:11:52.671610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:52.676750 | orchestrator | 2025-06-02 14:11:52.676781 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 14:11:52.678635 | orchestrator | Monday 02 June 2025 14:11:52 +0000 (0:00:00.128) 0:00:38.395 *********** 2025-06-02 14:11:52.829106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:52.829213 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:52.830144 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:52.831465 | orchestrator | 2025-06-02 14:11:52.832078 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 14:11:52.832779 | orchestrator | Monday 02 June 2025 14:11:52 +0000 (0:00:00.156) 0:00:38.552 *********** 2025-06-02 14:11:52.967284 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:52.968150 | orchestrator | 2025-06-02 14:11:52.969088 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 14:11:52.969995 | orchestrator | Monday 02 June 2025 14:11:52 +0000 (0:00:00.137) 0:00:38.690 *********** 2025-06-02 14:11:53.123199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:53.127730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:53.128364 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:53.129813 | orchestrator | 2025-06-02 14:11:53.132911 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 14:11:53.133701 | orchestrator | Monday 02 June 2025 14:11:53 +0000 (0:00:00.156) 0:00:38.846 *********** 2025-06-02 14:11:53.455109 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:53.455209 | orchestrator | 2025-06-02 14:11:53.455950 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 14:11:53.456931 | orchestrator | Monday 02 June 2025 14:11:53 +0000 (0:00:00.331) 0:00:39.178 *********** 2025-06-02 14:11:53.605573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:53.606303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:53.607702 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:53.608862 | orchestrator | 2025-06-02 14:11:53.609328 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 14:11:53.610427 | orchestrator | Monday 02 June 2025 14:11:53 +0000 (0:00:00.151) 0:00:39.329 *********** 2025-06-02 14:11:53.759332 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:53.760014 | orchestrator | 2025-06-02 14:11:53.761005 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 14:11:53.762344 | orchestrator | Monday 02 June 2025 14:11:53 +0000 (0:00:00.152) 0:00:39.482 *********** 2025-06-02 14:11:53.909015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:53.909550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:53.911226 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
14:11:53.913323 | orchestrator | 2025-06-02 14:11:53.913973 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 14:11:53.915022 | orchestrator | Monday 02 June 2025 14:11:53 +0000 (0:00:00.150) 0:00:39.632 *********** 2025-06-02 14:11:54.075043 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:54.075970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:54.077239 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:54.078305 | orchestrator | 2025-06-02 14:11:54.079002 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 14:11:54.079722 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.165) 0:00:39.798 *********** 2025-06-02 14:11:54.233029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:54.233973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:54.234579 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:54.235802 | orchestrator | 2025-06-02 14:11:54.237362 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 14:11:54.237820 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.155) 0:00:39.954 *********** 2025-06-02 14:11:54.373898 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:54.374147 | orchestrator | 2025-06-02 14:11:54.375603 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 14:11:54.377592 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.141) 0:00:40.096 *********** 2025-06-02 14:11:54.500697 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:54.501669 | orchestrator | 2025-06-02 14:11:54.503050 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 14:11:54.503678 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.127) 0:00:40.224 *********** 2025-06-02 14:11:54.644235 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:54.644560 | orchestrator | 2025-06-02 14:11:54.646063 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 14:11:54.646897 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.142) 0:00:40.367 *********** 2025-06-02 14:11:54.817658 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 14:11:54.817754 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 14:11:54.818374 | orchestrator | } 2025-06-02 14:11:54.819276 | orchestrator | 2025-06-02 14:11:54.819973 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 14:11:54.820233 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.172) 0:00:40.539 *********** 2025-06-02 14:11:54.972297 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 14:11:54.972909 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 14:11:54.974282 | orchestrator | } 2025-06-02 14:11:54.976308 | 
orchestrator | 2025-06-02 14:11:54.976346 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 14:11:54.976359 | orchestrator | Monday 02 June 2025 14:11:54 +0000 (0:00:00.154) 0:00:40.694 *********** 2025-06-02 14:11:55.100915 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 14:11:55.101076 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 14:11:55.101685 | orchestrator | } 2025-06-02 14:11:55.102325 | orchestrator | 2025-06-02 14:11:55.104600 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 14:11:55.105474 | orchestrator | Monday 02 June 2025 14:11:55 +0000 (0:00:00.130) 0:00:40.824 *********** 2025-06-02 14:11:55.897118 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:55.897728 | orchestrator | 2025-06-02 14:11:55.899043 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 14:11:55.900278 | orchestrator | Monday 02 June 2025 14:11:55 +0000 (0:00:00.794) 0:00:41.619 *********** 2025-06-02 14:11:56.406868 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:56.407034 | orchestrator | 2025-06-02 14:11:56.407916 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 14:11:56.409143 | orchestrator | Monday 02 June 2025 14:11:56 +0000 (0:00:00.509) 0:00:42.128 *********** 2025-06-02 14:11:56.909374 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:56.909475 | orchestrator | 2025-06-02 14:11:56.909723 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 14:11:56.910400 | orchestrator | Monday 02 June 2025 14:11:56 +0000 (0:00:00.503) 0:00:42.632 *********** 2025-06-02 14:11:57.069601 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:11:57.070935 | orchestrator | 2025-06-02 14:11:57.071137 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 14:11:57.071248 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.161) 0:00:42.793 *********** 2025-06-02 14:11:57.182610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:57.182719 | orchestrator | 2025-06-02 14:11:57.183413 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 14:11:57.183995 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.112) 0:00:42.905 *********** 2025-06-02 14:11:57.297130 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:57.297674 | orchestrator | 2025-06-02 14:11:57.298309 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 14:11:57.299040 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.116) 0:00:43.022 *********** 2025-06-02 14:11:57.450635 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 14:11:57.450753 | orchestrator |  "vgs_report": { 2025-06-02 14:11:57.450994 | orchestrator |  "vg": [] 2025-06-02 14:11:57.451731 | orchestrator |  } 2025-06-02 14:11:57.452696 | orchestrator | } 2025-06-02 14:11:57.453547 | orchestrator | 2025-06-02 14:11:57.454527 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 14:11:57.454729 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.151) 0:00:43.173 *********** 2025-06-02 14:11:57.585480 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:57.586088 | orchestrator | 2025-06-02 
14:11:57.586627 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 14:11:57.587325 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.136) 0:00:43.309 *********** 2025-06-02 14:11:57.729823 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:57.730218 | orchestrator | 2025-06-02 14:11:57.731536 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 14:11:57.732364 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.144) 0:00:43.454 *********** 2025-06-02 14:11:57.874674 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:57.875114 | orchestrator | 2025-06-02 14:11:57.875413 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 14:11:57.876254 | orchestrator | Monday 02 June 2025 14:11:57 +0000 (0:00:00.142) 0:00:43.596 *********** 2025-06-02 14:11:58.012665 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.012792 | orchestrator | 2025-06-02 14:11:58.013234 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 14:11:58.014134 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.139) 0:00:43.735 *********** 2025-06-02 14:11:58.148278 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.148482 | orchestrator | 2025-06-02 14:11:58.149071 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 14:11:58.149812 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.135) 0:00:43.871 *********** 2025-06-02 14:11:58.459634 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.459961 | orchestrator | 2025-06-02 14:11:58.461127 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 14:11:58.461587 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.309) 0:00:44.181 *********** 2025-06-02 14:11:58.596912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.597068 | orchestrator | 2025-06-02 14:11:58.597493 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 14:11:58.598957 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.138) 0:00:44.320 *********** 2025-06-02 14:11:58.724995 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.725550 | orchestrator | 2025-06-02 14:11:58.726293 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 14:11:58.726733 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.128) 0:00:44.448 *********** 2025-06-02 14:11:58.862327 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:58.862466 | orchestrator | 2025-06-02 14:11:58.863302 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 14:11:58.864768 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.137) 0:00:44.585 *********** 2025-06-02 14:11:59.003596 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.004057 | orchestrator | 2025-06-02 14:11:59.005089 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 14:11:59.005773 | orchestrator | Monday 02 June 2025 14:11:58 +0000 (0:00:00.141) 0:00:44.727 *********** 2025-06-02 14:11:59.143751 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.144240 | 
orchestrator | 2025-06-02 14:11:59.145886 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 14:11:59.145992 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.140) 0:00:44.867 *********** 2025-06-02 14:11:59.277674 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.278091 | orchestrator | 2025-06-02 14:11:59.279086 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 14:11:59.280242 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.132) 0:00:45.000 *********** 2025-06-02 14:11:59.414173 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.415065 | orchestrator | 2025-06-02 14:11:59.417033 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 14:11:59.417083 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.136) 0:00:45.137 *********** 2025-06-02 14:11:59.553917 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.554617 | orchestrator | 2025-06-02 14:11:59.555534 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 14:11:59.556257 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.140) 0:00:45.277 *********** 2025-06-02 14:11:59.712610 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:59.716298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:59.717235 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.718448 | orchestrator | 2025-06-02 14:11:59.719176 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 14:11:59.720010 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.158) 0:00:45.436 *********** 2025-06-02 14:11:59.868701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:11:59.869711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:11:59.871012 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:11:59.871876 | orchestrator | 2025-06-02 14:11:59.872594 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 14:11:59.873370 | orchestrator | Monday 02 June 2025 14:11:59 +0000 (0:00:00.156) 0:00:45.592 *********** 2025-06-02 14:12:00.025132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:00.025727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:00.026869 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:00.027979 | orchestrator | 2025-06-02 14:12:00.028645 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 14:12:00.029528 | orchestrator | Monday 02 June 2025 14:12:00 +0000 
(0:00:00.156) 0:00:45.749 *********** 2025-06-02 14:12:00.425105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:00.426155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:00.427056 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:00.427688 | orchestrator | 2025-06-02 14:12:00.428673 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 14:12:00.429073 | orchestrator | Monday 02 June 2025 14:12:00 +0000 (0:00:00.398) 0:00:46.147 *********** 2025-06-02 14:12:00.577761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:00.578252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:00.579189 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:00.580364 | orchestrator | 2025-06-02 14:12:00.582150 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 14:12:00.583305 | orchestrator | Monday 02 June 2025 14:12:00 +0000 (0:00:00.154) 0:00:46.301 *********** 2025-06-02 14:12:00.731198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:00.731306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:00.731572 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:00.732648 | orchestrator | 2025-06-02 14:12:00.733485 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 14:12:00.734112 | orchestrator | Monday 02 June 2025 14:12:00 +0000 (0:00:00.153) 0:00:46.454 *********** 2025-06-02 14:12:00.884107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:00.884268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:00.885066 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:00.885931 | orchestrator | 2025-06-02 14:12:00.887119 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 14:12:00.888669 | orchestrator | Monday 02 June 2025 14:12:00 +0000 (0:00:00.152) 0:00:46.607 *********** 2025-06-02 14:12:01.048203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:01.048288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:01.048670 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:01.049592 | orchestrator | 
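The next tasks query LVM state directly: the register names _lvs_cmd_output and _pvs_cmd_output appear verbatim in the "Combine JSON" task name, and the lvm_report printed further down has exactly the lv/pv shape that lvs and pvs emit with --reportformat json. A sketch under those assumptions (the real tasks presumably also restrict the output to the ceph-* VGs):

    # Collect LV and PV state as JSON, then merge into one report fact.
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    # lvs/pvs wrap their rows in report[0].lv / report[0].pv respectively,
    # which yields the {"lv": [...], "pv": [...]} structure printed below.
    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"
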
2025-06-02 14:12:01.050059 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 14:12:01.050580 | orchestrator | Monday 02 June 2025 14:12:01 +0000 (0:00:00.164) 0:00:46.771 *********** 2025-06-02 14:12:01.593735 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:12:01.594148 | orchestrator | 2025-06-02 14:12:01.594801 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 14:12:01.596594 | orchestrator | Monday 02 June 2025 14:12:01 +0000 (0:00:00.544) 0:00:47.316 *********** 2025-06-02 14:12:02.111513 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:12:02.112187 | orchestrator | 2025-06-02 14:12:02.113735 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 14:12:02.115444 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.518) 0:00:47.835 *********** 2025-06-02 14:12:02.264267 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:12:02.264430 | orchestrator | 2025-06-02 14:12:02.265490 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 14:12:02.268051 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.152) 0:00:47.987 *********** 2025-06-02 14:12:02.434193 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'vg_name': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}) 2025-06-02 14:12:02.434627 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'vg_name': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'}) 2025-06-02 14:12:02.435412 | orchestrator | 2025-06-02 14:12:02.436318 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 14:12:02.437112 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.169) 0:00:48.157 *********** 2025-06-02 14:12:02.595878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:02.596583 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:02.597489 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:02.598394 | orchestrator | 2025-06-02 14:12:02.599085 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 14:12:02.600303 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.162) 0:00:48.319 *********** 2025-06-02 14:12:02.752490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:02.752596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:02.752704 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:02.753951 | orchestrator | 2025-06-02 14:12:02.754157 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 14:12:02.755690 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.155) 0:00:48.475 *********** 2025-06-02 14:12:02.903188 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'})  2025-06-02 14:12:02.903355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'})  2025-06-02 14:12:02.903937 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:02.904787 | orchestrator | 2025-06-02 14:12:02.906179 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 14:12:02.906717 | orchestrator | Monday 02 June 2025 14:12:02 +0000 (0:00:00.151) 0:00:48.626 *********** 2025-06-02 14:12:03.374314 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 14:12:03.375250 | orchestrator |  "lvm_report": { 2025-06-02 14:12:03.375654 | orchestrator |  "lv": [ 2025-06-02 14:12:03.377265 | orchestrator |  { 2025-06-02 14:12:03.377627 | orchestrator |  "lv_name": "osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10", 2025-06-02 14:12:03.379023 | orchestrator |  "vg_name": "ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10" 2025-06-02 14:12:03.380241 | orchestrator |  }, 2025-06-02 14:12:03.381821 | orchestrator |  { 2025-06-02 14:12:03.382962 | orchestrator |  "lv_name": "osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d", 2025-06-02 14:12:03.383311 | orchestrator |  "vg_name": "ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d" 2025-06-02 14:12:03.384149 | orchestrator |  } 2025-06-02 14:12:03.384710 | orchestrator |  ], 2025-06-02 14:12:03.385522 | orchestrator |  "pv": [ 2025-06-02 14:12:03.386418 | orchestrator |  { 2025-06-02 14:12:03.386618 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 14:12:03.387325 | orchestrator |  "vg_name": "ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10" 2025-06-02 14:12:03.387669 | orchestrator |  }, 2025-06-02 14:12:03.387971 | orchestrator |  { 2025-06-02 14:12:03.389438 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 14:12:03.389994 | orchestrator |  "vg_name": "ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d" 2025-06-02 14:12:03.390868 | orchestrator |  } 2025-06-02 14:12:03.391309 | orchestrator |  ] 2025-06-02 14:12:03.392055 | orchestrator |  } 2025-06-02 14:12:03.392507 | orchestrator | } 2025-06-02 14:12:03.393022 | orchestrator | 2025-06-02 14:12:03.393450 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 14:12:03.393750 | orchestrator | 2025-06-02 14:12:03.394346 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 14:12:03.394986 | orchestrator | Monday 02 June 2025 14:12:03 +0000 (0:00:00.470) 0:00:49.096 *********** 2025-06-02 14:12:03.613718 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 14:12:03.614362 | orchestrator | 2025-06-02 14:12:03.614684 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 14:12:03.616040 | orchestrator | Monday 02 June 2025 14:12:03 +0000 (0:00:00.240) 0:00:49.337 *********** 2025-06-02 14:12:03.834590 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:03.835326 | orchestrator | 2025-06-02 14:12:03.836142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:03.837289 | orchestrator | Monday 02 June 2025 14:12:03 +0000 (0:00:00.221) 0:00:49.558 *********** 2025-06-02 14:12:04.227578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 
14:12:04.229030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 14:12:04.229160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 14:12:04.230360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 14:12:04.231444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 14:12:04.232725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-02 14:12:04.233727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 14:12:04.234136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 14:12:04.234645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 14:12:04.234951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 14:12:04.235543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 14:12:04.235875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 14:12:04.236448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 14:12:04.236697 | orchestrator | 2025-06-02 14:12:04.237566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:04.237789 | orchestrator | Monday 02 June 2025 14:12:04 +0000 (0:00:00.392) 0:00:49.951 *********** 2025-06-02 14:12:04.493549 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:04.494785 | orchestrator | 2025-06-02 14:12:04.494820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:04.494882 | orchestrator | Monday 02 June 2025 14:12:04 +0000 (0:00:00.262) 0:00:50.214 *********** 2025-06-02 14:12:04.695393 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:04.695486 | orchestrator | 2025-06-02 14:12:04.696378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:04.697044 | orchestrator | Monday 02 June 2025 14:12:04 +0000 (0:00:00.204) 0:00:50.418 *********** 2025-06-02 14:12:04.891583 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:04.891673 | orchestrator | 2025-06-02 14:12:04.893416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:04.895447 | orchestrator | Monday 02 June 2025 14:12:04 +0000 (0:00:00.195) 0:00:50.614 *********** 2025-06-02 14:12:05.083338 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:05.084550 | orchestrator | 2025-06-02 14:12:05.085619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:05.086544 | orchestrator | Monday 02 June 2025 14:12:05 +0000 (0:00:00.192) 0:00:50.807 *********** 2025-06-02 14:12:05.277904 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:05.277993 | orchestrator | 2025-06-02 14:12:05.278983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:05.279741 | orchestrator | Monday 02 June 2025 14:12:05 +0000 (0:00:00.193) 0:00:51.001 
*********** 2025-06-02 14:12:05.861367 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:05.863213 | orchestrator | 2025-06-02 14:12:05.863658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:05.864521 | orchestrator | Monday 02 June 2025 14:12:05 +0000 (0:00:00.581) 0:00:51.582 *********** 2025-06-02 14:12:06.060040 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:06.060202 | orchestrator | 2025-06-02 14:12:06.063981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:06.064063 | orchestrator | Monday 02 June 2025 14:12:06 +0000 (0:00:00.200) 0:00:51.782 *********** 2025-06-02 14:12:06.273128 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:06.273290 | orchestrator | 2025-06-02 14:12:06.273996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:06.275211 | orchestrator | Monday 02 June 2025 14:12:06 +0000 (0:00:00.213) 0:00:51.996 *********** 2025-06-02 14:12:06.681633 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7) 2025-06-02 14:12:06.681732 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7) 2025-06-02 14:12:06.682095 | orchestrator | 2025-06-02 14:12:06.684347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:06.684444 | orchestrator | Monday 02 June 2025 14:12:06 +0000 (0:00:00.407) 0:00:52.403 *********** 2025-06-02 14:12:07.109004 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb) 2025-06-02 14:12:07.109204 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb) 2025-06-02 14:12:07.110938 | orchestrator | 2025-06-02 14:12:07.110963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:07.112033 | orchestrator | Monday 02 June 2025 14:12:07 +0000 (0:00:00.429) 0:00:52.833 *********** 2025-06-02 14:12:07.538871 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000) 2025-06-02 14:12:07.539491 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000) 2025-06-02 14:12:07.539720 | orchestrator | 2025-06-02 14:12:07.539744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:07.539756 | orchestrator | Monday 02 June 2025 14:12:07 +0000 (0:00:00.427) 0:00:53.260 *********** 2025-06-02 14:12:07.964951 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0) 2025-06-02 14:12:07.968633 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0) 2025-06-02 14:12:07.969023 | orchestrator | 2025-06-02 14:12:07.970257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 14:12:07.971725 | orchestrator | Monday 02 June 2025 14:12:07 +0000 (0:00:00.427) 0:00:53.687 *********** 2025-06-02 14:12:08.295267 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 14:12:08.295410 | orchestrator | 2025-06-02 14:12:08.296505 | orchestrator | TASK [Add known partitions to the list of 
available block devices] ************* 2025-06-02 14:12:08.297054 | orchestrator | Monday 02 June 2025 14:12:08 +0000 (0:00:00.331) 0:00:54.018 *********** 2025-06-02 14:12:08.721062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 14:12:08.722515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 14:12:08.723125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 14:12:08.723538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 14:12:08.724153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 14:12:08.725332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 14:12:08.725644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 14:12:08.726318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 14:12:08.727533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 14:12:08.727554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 14:12:08.728470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-02 14:12:08.729235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 14:12:08.729673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 14:12:08.730363 | orchestrator | 2025-06-02 14:12:08.730895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:08.731360 | orchestrator | Monday 02 June 2025 14:12:08 +0000 (0:00:00.424) 0:00:54.442 *********** 2025-06-02 14:12:08.917362 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:08.917533 | orchestrator | 2025-06-02 14:12:08.918546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:08.918568 | orchestrator | Monday 02 June 2025 14:12:08 +0000 (0:00:00.197) 0:00:54.640 *********** 2025-06-02 14:12:09.110765 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:09.111755 | orchestrator | 2025-06-02 14:12:09.112692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:09.113349 | orchestrator | Monday 02 June 2025 14:12:09 +0000 (0:00:00.193) 0:00:54.833 *********** 2025-06-02 14:12:09.745023 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:09.746012 | orchestrator | 2025-06-02 14:12:09.747478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:09.747503 | orchestrator | Monday 02 June 2025 14:12:09 +0000 (0:00:00.633) 0:00:55.467 *********** 2025-06-02 14:12:09.954894 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:09.956289 | orchestrator | 2025-06-02 14:12:09.957748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:09.957773 | orchestrator | Monday 02 June 2025 14:12:09 +0000 (0:00:00.210) 0:00:55.677 
*********** 2025-06-02 14:12:10.197753 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:10.198479 | orchestrator | 2025-06-02 14:12:10.199305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:10.200421 | orchestrator | Monday 02 June 2025 14:12:10 +0000 (0:00:00.241) 0:00:55.918 *********** 2025-06-02 14:12:10.397137 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:10.398648 | orchestrator | 2025-06-02 14:12:10.399975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:10.401788 | orchestrator | Monday 02 June 2025 14:12:10 +0000 (0:00:00.201) 0:00:56.120 *********** 2025-06-02 14:12:10.602788 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:10.603470 | orchestrator | 2025-06-02 14:12:10.603686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:10.605152 | orchestrator | Monday 02 June 2025 14:12:10 +0000 (0:00:00.204) 0:00:56.325 *********** 2025-06-02 14:12:10.807969 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:10.808354 | orchestrator | 2025-06-02 14:12:10.809265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:10.809657 | orchestrator | Monday 02 June 2025 14:12:10 +0000 (0:00:00.205) 0:00:56.531 *********** 2025-06-02 14:12:11.438743 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 14:12:11.439432 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 14:12:11.439511 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 14:12:11.439907 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 14:12:11.440532 | orchestrator | 2025-06-02 14:12:11.440810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:11.441094 | orchestrator | Monday 02 June 2025 14:12:11 +0000 (0:00:00.630) 0:00:57.162 *********** 2025-06-02 14:12:11.646329 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:11.647883 | orchestrator | 2025-06-02 14:12:11.648290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:11.648315 | orchestrator | Monday 02 June 2025 14:12:11 +0000 (0:00:00.205) 0:00:57.367 *********** 2025-06-02 14:12:11.845850 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:11.846099 | orchestrator | 2025-06-02 14:12:11.848386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:11.849008 | orchestrator | Monday 02 June 2025 14:12:11 +0000 (0:00:00.201) 0:00:57.568 *********** 2025-06-02 14:12:12.044503 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:12.045209 | orchestrator | 2025-06-02 14:12:12.046319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 14:12:12.047223 | orchestrator | Monday 02 June 2025 14:12:12 +0000 (0:00:00.198) 0:00:57.767 *********** 2025-06-02 14:12:12.238099 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:12.239271 | orchestrator | 2025-06-02 14:12:12.239651 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 14:12:12.240278 | orchestrator | Monday 02 June 2025 14:12:12 +0000 (0:00:00.194) 0:00:57.962 *********** 2025-06-02 14:12:12.594498 | orchestrator | skipping: [testbed-node-5] 
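The following three tasks turn ceph_osd_devices into LVM volume groups and logical volumes. The item shapes in the log show ceph_osd_devices keyed by device name with a stable osd_lvm_uuid, and lvm_volumes entries of the form {data: osd-block-<uuid>, data_vg: ceph-<uuid>}. A hedged sketch using the standard community.general.lvg/lvol modules; the helper fact _block_vgs is an assumed name, not taken from the playbook:

    # ceph_osd_devices (per the log): {sdb: {osd_lvm_uuid: ...}, sdc: {...}}
    - name: Create dict of block VGs -> PVs from ceph_osd_devices
      ansible.builtin.set_fact:
        _block_vgs: >-
          {{ _block_vgs | default({})
             | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"

    # One VG per OSD device, then one block LV spanning the whole VG.
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _block_vgs[item.data_vg] }}"
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
        shrink: false
      loop: "{{ lvm_volumes }}"
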
2025-06-02 14:12:12.595022 | orchestrator | 2025-06-02 14:12:12.595808 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 14:12:12.597469 | orchestrator | Monday 02 June 2025 14:12:12 +0000 (0:00:00.354) 0:00:58.316 *********** 2025-06-02 14:12:12.779354 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}}) 2025-06-02 14:12:12.779581 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c542c38e-2fd0-548c-8c9f-0ca498087064'}}) 2025-06-02 14:12:12.780307 | orchestrator | 2025-06-02 14:12:12.781122 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 14:12:12.781795 | orchestrator | Monday 02 June 2025 14:12:12 +0000 (0:00:00.186) 0:00:58.503 *********** 2025-06-02 14:12:14.842201 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}) 2025-06-02 14:12:14.842357 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'}) 2025-06-02 14:12:14.842366 | orchestrator | 2025-06-02 14:12:14.842413 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 14:12:14.842588 | orchestrator | Monday 02 June 2025 14:12:14 +0000 (0:00:02.049) 0:01:00.552 *********** 2025-06-02 14:12:14.982157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:14.982407 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:14.982982 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:14.983915 | orchestrator | 2025-06-02 14:12:14.984977 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 14:12:14.985322 | orchestrator | Monday 02 June 2025 14:12:14 +0000 (0:00:00.151) 0:01:00.703 *********** 2025-06-02 14:12:16.313323 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}) 2025-06-02 14:12:16.313448 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'}) 2025-06-02 14:12:16.313463 | orchestrator | 2025-06-02 14:12:16.313475 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 14:12:16.313487 | orchestrator | Monday 02 June 2025 14:12:16 +0000 (0:00:01.330) 0:01:02.034 *********** 2025-06-02 14:12:16.494635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:16.494728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:16.495321 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:16.495469 | orchestrator | 2025-06-02 14:12:16.496716 | orchestrator | TASK [Create 
DB VGs] *********************************************************** 2025-06-02 14:12:16.498219 | orchestrator | Monday 02 June 2025 14:12:16 +0000 (0:00:00.181) 0:01:02.216 *********** 2025-06-02 14:12:16.639235 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:16.640188 | orchestrator | 2025-06-02 14:12:16.641132 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 14:12:16.642140 | orchestrator | Monday 02 June 2025 14:12:16 +0000 (0:00:00.146) 0:01:02.362 *********** 2025-06-02 14:12:16.787638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:16.787880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:16.788496 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:16.789143 | orchestrator | 2025-06-02 14:12:16.789613 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 14:12:16.790678 | orchestrator | Monday 02 June 2025 14:12:16 +0000 (0:00:00.147) 0:01:02.510 *********** 2025-06-02 14:12:16.929179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:16.930567 | orchestrator | 2025-06-02 14:12:16.930858 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 14:12:16.932368 | orchestrator | Monday 02 June 2025 14:12:16 +0000 (0:00:00.141) 0:01:02.652 *********** 2025-06-02 14:12:17.078289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:17.078776 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:17.079803 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:17.080592 | orchestrator | 2025-06-02 14:12:17.082321 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 14:12:17.082674 | orchestrator | Monday 02 June 2025 14:12:17 +0000 (0:00:00.149) 0:01:02.801 *********** 2025-06-02 14:12:17.224635 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:17.225122 | orchestrator | 2025-06-02 14:12:17.225961 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 14:12:17.226598 | orchestrator | Monday 02 June 2025 14:12:17 +0000 (0:00:00.146) 0:01:02.948 *********** 2025-06-02 14:12:17.373804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:17.373963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:17.374455 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:17.374967 | orchestrator | 2025-06-02 14:12:17.375620 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 14:12:17.376311 | orchestrator | Monday 02 June 2025 14:12:17 +0000 (0:00:00.149) 0:01:03.098 *********** 2025-06-02 14:12:17.527983 | orchestrator | 
ok: [testbed-node-5] 2025-06-02 14:12:17.528173 | orchestrator | 2025-06-02 14:12:17.529804 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 14:12:17.530560 | orchestrator | Monday 02 June 2025 14:12:17 +0000 (0:00:00.153) 0:01:03.251 *********** 2025-06-02 14:12:17.974635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:17.976988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:17.978284 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:17.980172 | orchestrator | 2025-06-02 14:12:17.980189 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 14:12:17.980714 | orchestrator | Monday 02 June 2025 14:12:17 +0000 (0:00:00.445) 0:01:03.697 *********** 2025-06-02 14:12:18.126411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:18.127761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:18.129585 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:18.130981 | orchestrator | 2025-06-02 14:12:18.132057 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 14:12:18.132967 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.151) 0:01:03.848 *********** 2025-06-02 14:12:18.291774 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:18.293598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:18.295262 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:18.296797 | orchestrator | 2025-06-02 14:12:18.298130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 14:12:18.299027 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.166) 0:01:04.015 *********** 2025-06-02 14:12:18.428467 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:18.430094 | orchestrator | 2025-06-02 14:12:18.432112 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 14:12:18.433051 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.136) 0:01:04.152 *********** 2025-06-02 14:12:18.570204 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:18.571345 | orchestrator | 2025-06-02 14:12:18.573028 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 14:12:18.575023 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.141) 0:01:04.293 *********** 2025-06-02 14:12:18.703515 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:18.704535 | orchestrator | 2025-06-02 14:12:18.705799 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 14:12:18.706929 | 
orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.132) 0:01:04.426 *********** 2025-06-02 14:12:18.841980 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 14:12:18.843181 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 14:12:18.844478 | orchestrator | } 2025-06-02 14:12:18.845457 | orchestrator | 2025-06-02 14:12:18.846260 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 14:12:18.847530 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.139) 0:01:04.565 *********** 2025-06-02 14:12:18.994277 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 14:12:18.995494 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 14:12:18.996641 | orchestrator | } 2025-06-02 14:12:19.001879 | orchestrator | 2025-06-02 14:12:19.002635 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 14:12:19.003458 | orchestrator | Monday 02 June 2025 14:12:18 +0000 (0:00:00.151) 0:01:04.717 *********** 2025-06-02 14:12:19.133823 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 14:12:19.134694 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 14:12:19.135931 | orchestrator | } 2025-06-02 14:12:19.137252 | orchestrator | 2025-06-02 14:12:19.137544 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 14:12:19.138349 | orchestrator | Monday 02 June 2025 14:12:19 +0000 (0:00:00.139) 0:01:04.856 *********** 2025-06-02 14:12:19.665342 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:19.665444 | orchestrator | 2025-06-02 14:12:19.665460 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 14:12:19.665527 | orchestrator | Monday 02 June 2025 14:12:19 +0000 (0:00:00.532) 0:01:05.389 *********** 2025-06-02 14:12:20.184293 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:20.184399 | orchestrator | 2025-06-02 14:12:20.184517 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 14:12:20.184537 | orchestrator | Monday 02 June 2025 14:12:20 +0000 (0:00:00.517) 0:01:05.906 *********** 2025-06-02 14:12:20.720761 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:20.721007 | orchestrator | 2025-06-02 14:12:20.721747 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 14:12:20.722623 | orchestrator | Monday 02 June 2025 14:12:20 +0000 (0:00:00.537) 0:01:06.444 *********** 2025-06-02 14:12:21.126746 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:21.127029 | orchestrator | 2025-06-02 14:12:21.127796 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 14:12:21.128263 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.404) 0:01:06.849 *********** 2025-06-02 14:12:21.248820 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:21.249293 | orchestrator | 2025-06-02 14:12:21.250343 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 14:12:21.251265 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.123) 0:01:06.972 *********** 2025-06-02 14:12:21.365412 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:21.365517 | orchestrator | 2025-06-02 14:12:21.366889 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 
14:12:21.367465 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.116) 0:01:07.089 *********** 2025-06-02 14:12:21.510233 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 14:12:21.510343 | orchestrator |  "vgs_report": { 2025-06-02 14:12:21.511819 | orchestrator |  "vg": [] 2025-06-02 14:12:21.513938 | orchestrator |  } 2025-06-02 14:12:21.514174 | orchestrator | } 2025-06-02 14:12:21.516143 | orchestrator | 2025-06-02 14:12:21.517079 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 14:12:21.517862 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.144) 0:01:07.233 *********** 2025-06-02 14:12:21.656560 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:21.657552 | orchestrator | 2025-06-02 14:12:21.658383 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 14:12:21.659179 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.147) 0:01:07.380 *********** 2025-06-02 14:12:21.815299 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:21.815983 | orchestrator | 2025-06-02 14:12:21.819255 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 14:12:21.819292 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.157) 0:01:07.538 *********** 2025-06-02 14:12:21.957941 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:21.958121 | orchestrator | 2025-06-02 14:12:21.958238 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 14:12:21.959187 | orchestrator | Monday 02 June 2025 14:12:21 +0000 (0:00:00.141) 0:01:07.679 *********** 2025-06-02 14:12:22.101482 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:22.102414 | orchestrator | 2025-06-02 14:12:22.103845 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 14:12:22.105616 | orchestrator | Monday 02 June 2025 14:12:22 +0000 (0:00:00.144) 0:01:07.824 *********** 2025-06-02 14:12:22.228994 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:22.229766 | orchestrator | 2025-06-02 14:12:22.231133 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 14:12:22.233255 | orchestrator | Monday 02 June 2025 14:12:22 +0000 (0:00:00.127) 0:01:07.951 *********** 2025-06-02 14:12:22.375146 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:22.375236 | orchestrator | 2025-06-02 14:12:22.375546 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 14:12:22.376584 | orchestrator | Monday 02 June 2025 14:12:22 +0000 (0:00:00.145) 0:01:08.097 *********** 2025-06-02 14:12:22.509347 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:22.510527 | orchestrator | 2025-06-02 14:12:22.511046 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 14:12:22.512447 | orchestrator | Monday 02 June 2025 14:12:22 +0000 (0:00:00.135) 0:01:08.232 *********** 2025-06-02 14:12:22.662211 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:22.662810 | orchestrator | 2025-06-02 14:12:22.663934 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 14:12:22.664408 | orchestrator | Monday 02 June 2025 14:12:22 +0000 (0:00:00.153) 0:01:08.386 *********** 2025-06-02 14:12:23.051254 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.052014 | orchestrator | 2025-06-02 14:12:23.052383 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 14:12:23.052999 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.388) 0:01:08.774 *********** 2025-06-02 14:12:23.192611 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.193317 | orchestrator | 2025-06-02 14:12:23.194540 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 14:12:23.195410 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.141) 0:01:08.916 *********** 2025-06-02 14:12:23.338928 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.339608 | orchestrator | 2025-06-02 14:12:23.340318 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 14:12:23.341606 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.146) 0:01:09.062 *********** 2025-06-02 14:12:23.511802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.512614 | orchestrator | 2025-06-02 14:12:23.513216 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 14:12:23.515560 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.172) 0:01:09.235 *********** 2025-06-02 14:12:23.652759 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.653159 | orchestrator | 2025-06-02 14:12:23.654970 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 14:12:23.656578 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.140) 0:01:09.376 *********** 2025-06-02 14:12:23.797179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.797968 | orchestrator | 2025-06-02 14:12:23.798501 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 14:12:23.799638 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.144) 0:01:09.520 *********** 2025-06-02 14:12:23.950307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:23.951296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:23.952456 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:23.954154 | orchestrator | 2025-06-02 14:12:23.955162 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 14:12:23.955339 | orchestrator | Monday 02 June 2025 14:12:23 +0000 (0:00:00.152) 0:01:09.673 *********** 2025-06-02 14:12:24.111235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:24.111903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:24.112947 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:24.113620 | orchestrator | 2025-06-02 14:12:24.115533 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 14:12:24.115947 | orchestrator | Monday 02 
June 2025 14:12:24 +0000 (0:00:00.159) 0:01:09.832 *********** 2025-06-02 14:12:24.276256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:24.277667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:24.278278 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:24.279405 | orchestrator | 2025-06-02 14:12:24.281950 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 14:12:24.282818 | orchestrator | Monday 02 June 2025 14:12:24 +0000 (0:00:00.166) 0:01:09.998 *********** 2025-06-02 14:12:24.425135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:24.425601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:24.427089 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:24.427369 | orchestrator | 2025-06-02 14:12:24.428433 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 14:12:24.429188 | orchestrator | Monday 02 June 2025 14:12:24 +0000 (0:00:00.149) 0:01:10.148 *********** 2025-06-02 14:12:24.585417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:24.586014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:24.587394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:24.588299 | orchestrator | 2025-06-02 14:12:24.589298 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 14:12:24.590438 | orchestrator | Monday 02 June 2025 14:12:24 +0000 (0:00:00.160) 0:01:10.309 *********** 2025-06-02 14:12:24.734982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:24.735819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:24.736621 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:24.737379 | orchestrator | 2025-06-02 14:12:24.738001 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 14:12:24.739599 | orchestrator | Monday 02 June 2025 14:12:24 +0000 (0:00:00.149) 0:01:10.458 *********** 2025-06-02 14:12:25.195345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:25.195517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:25.196319 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
14:12:25.196425 | orchestrator | 2025-06-02 14:12:25.196936 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 14:12:25.197243 | orchestrator | Monday 02 June 2025 14:12:25 +0000 (0:00:00.459) 0:01:10.918 *********** 2025-06-02 14:12:25.349661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:25.349880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:25.350606 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:25.351557 | orchestrator | 2025-06-02 14:12:25.352371 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 14:12:25.352722 | orchestrator | Monday 02 June 2025 14:12:25 +0000 (0:00:00.155) 0:01:11.073 *********** 2025-06-02 14:12:25.873398 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:25.874069 | orchestrator | 2025-06-02 14:12:25.875156 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 14:12:25.875876 | orchestrator | Monday 02 June 2025 14:12:25 +0000 (0:00:00.523) 0:01:11.597 *********** 2025-06-02 14:12:26.419101 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:26.420254 | orchestrator | 2025-06-02 14:12:26.420465 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 14:12:26.421684 | orchestrator | Monday 02 June 2025 14:12:26 +0000 (0:00:00.543) 0:01:12.140 *********** 2025-06-02 14:12:26.571595 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:26.572313 | orchestrator | 2025-06-02 14:12:26.573086 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 14:12:26.574227 | orchestrator | Monday 02 June 2025 14:12:26 +0000 (0:00:00.154) 0:01:12.295 *********** 2025-06-02 14:12:26.743871 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'vg_name': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}) 2025-06-02 14:12:26.744687 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'vg_name': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'}) 2025-06-02 14:12:26.745388 | orchestrator | 2025-06-02 14:12:26.746316 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 14:12:26.747065 | orchestrator | Monday 02 June 2025 14:12:26 +0000 (0:00:00.172) 0:01:12.467 *********** 2025-06-02 14:12:26.907087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:26.907595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:26.908281 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:26.909006 | orchestrator | 2025-06-02 14:12:26.909489 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 14:12:26.910303 | orchestrator | Monday 02 June 2025 14:12:26 +0000 (0:00:00.161) 0:01:12.629 *********** 2025-06-02 14:12:27.070135 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:27.070335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:27.071439 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:27.072298 | orchestrator | 2025-06-02 14:12:27.072817 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 14:12:27.073885 | orchestrator | Monday 02 June 2025 14:12:27 +0000 (0:00:00.163) 0:01:12.793 *********** 2025-06-02 14:12:27.234591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'})  2025-06-02 14:12:27.234711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'})  2025-06-02 14:12:27.234802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:27.235144 | orchestrator | 2025-06-02 14:12:27.235634 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 14:12:27.236372 | orchestrator | Monday 02 June 2025 14:12:27 +0000 (0:00:00.164) 0:01:12.957 *********** 2025-06-02 14:12:27.381703 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 14:12:27.382933 | orchestrator |  "lvm_report": { 2025-06-02 14:12:27.383353 | orchestrator |  "lv": [ 2025-06-02 14:12:27.384253 | orchestrator |  { 2025-06-02 14:12:27.385155 | orchestrator |  "lv_name": "osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d", 2025-06-02 14:12:27.386287 | orchestrator |  "vg_name": "ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d" 2025-06-02 14:12:27.387036 | orchestrator |  }, 2025-06-02 14:12:27.387969 | orchestrator |  { 2025-06-02 14:12:27.389111 | orchestrator |  "lv_name": "osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064", 2025-06-02 14:12:27.389571 | orchestrator |  "vg_name": "ceph-c542c38e-2fd0-548c-8c9f-0ca498087064" 2025-06-02 14:12:27.390267 | orchestrator |  } 2025-06-02 14:12:27.390980 | orchestrator |  ], 2025-06-02 14:12:27.391927 | orchestrator |  "pv": [ 2025-06-02 14:12:27.392118 | orchestrator |  { 2025-06-02 14:12:27.392766 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 14:12:27.393298 | orchestrator |  "vg_name": "ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d" 2025-06-02 14:12:27.393912 | orchestrator |  }, 2025-06-02 14:12:27.394545 | orchestrator |  { 2025-06-02 14:12:27.394774 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 14:12:27.395139 | orchestrator |  "vg_name": "ceph-c542c38e-2fd0-548c-8c9f-0ca498087064" 2025-06-02 14:12:27.395558 | orchestrator |  } 2025-06-02 14:12:27.395871 | orchestrator |  ] 2025-06-02 14:12:27.396421 | orchestrator |  } 2025-06-02 14:12:27.396618 | orchestrator | } 2025-06-02 14:12:27.397094 | orchestrator | 2025-06-02 14:12:27.397455 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:12:27.397747 | orchestrator | 2025-06-02 14:12:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 14:12:27.397913 | orchestrator | 2025-06-02 14:12:27 | INFO  | Please wait and do not abort execution. 
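For reference, the VG/LV layout and the lvm_report printed above can be reproduced with a handful of tasks. The following is a minimal sketch, assuming the community.general collection and LVM2's JSON reporting are available; the task names mirror the log, but the exact OSISM playbook internals differ (the log shows the create tasks looping over lvm_volumes entries, whereas this sketch derives everything from the ceph_osd_devices mapping):

    # Sketch only; not the verbatim OSISM tasks.
    - name: Create block VGs
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"  # items like {'key': 'sdb', 'value': {'osd_lvm_uuid': ...}}

    - name: Create block LVs
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%FREE
        shrink: false
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name --select 'vg_name=~"^ceph-"'
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name --select 'vg_name=~"^ceph-"'
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

dict2items turns the ceph_osd_devices mapping into the key/value items visible in the loop output above, and lvs/pvs with --reportformat json nest their rows under report[0], which is why the set_fact digs into .report.0 before the report is printed in the PLAY RECAP below.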
2025-06-02 14:12:27.398477 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 14:12:27.398787 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 14:12:27.399158 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-02 14:12:27.399417 | orchestrator | 2025-06-02 14:12:27.399777 | orchestrator | 2025-06-02 14:12:27.400087 | orchestrator | 2025-06-02 14:12:27.400409 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:12:27.400698 | orchestrator | Monday 02 June 2025 14:12:27 +0000 (0:00:00.148) 0:01:13.105 *********** 2025-06-02 14:12:27.401056 | orchestrator | =============================================================================== 2025-06-02 14:12:27.401520 | orchestrator | Create block VGs -------------------------------------------------------- 6.47s 2025-06-02 14:12:27.401710 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2025-06-02 14:12:27.402064 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.99s 2025-06-02 14:12:27.402408 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2025-06-02 14:12:27.402675 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2025-06-02 14:12:27.403041 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2025-06-02 14:12:27.403328 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2025-06-02 14:12:27.403595 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s 2025-06-02 14:12:27.403895 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2025-06-02 14:12:27.404176 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2025-06-02 14:12:27.404576 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s 2025-06-02 14:12:27.404773 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-06-02 14:12:27.405113 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2025-06-02 14:12:27.405870 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.77s 2025-06-02 14:12:27.405985 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.75s 2025-06-02 14:12:27.406004 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s 2025-06-02 14:12:27.406312 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-06-02 14:12:27.406537 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.72s 2025-06-02 14:12:27.406815 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.72s 2025-06-02 14:12:27.407112 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.72s 2025-06-02 14:12:29.891978 | orchestrator | Registering Redlock._acquired_script 2025-06-02 14:12:29.892091 | orchestrator | Registering Redlock._extend_script 2025-06-02 
14:12:29.892106 | orchestrator | Registering Redlock._release_script 2025-06-02 14:12:29.951443 | orchestrator | 2025-06-02 14:12:29 | INFO  | Task facf7bd8-e992-47a4-828a-159eb8d3b9ba (facts) was prepared for execution. 2025-06-02 14:12:29.951535 | orchestrator | 2025-06-02 14:12:29 | INFO  | It takes a moment until task facf7bd8-e992-47a4-828a-159eb8d3b9ba (facts) has been started and output is visible here. 2025-06-02 14:12:33.869284 | orchestrator | 2025-06-02 14:12:33.869515 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 14:12:33.870957 | orchestrator | 2025-06-02 14:12:33.871915 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 14:12:33.872718 | orchestrator | Monday 02 June 2025 14:12:33 +0000 (0:00:00.227) 0:00:00.227 *********** 2025-06-02 14:12:35.324231 | orchestrator | ok: [testbed-manager] 2025-06-02 14:12:35.326436 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:12:35.326473 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:12:35.327873 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:12:35.328851 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:12:35.329586 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:12:35.330268 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:35.334129 | orchestrator | 2025-06-02 14:12:35.334165 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 14:12:35.334178 | orchestrator | Monday 02 June 2025 14:12:35 +0000 (0:00:01.453) 0:00:01.680 *********** 2025-06-02 14:12:35.469335 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:12:35.542141 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:12:35.616242 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:12:35.688434 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:12:35.760327 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:12:36.399567 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:36.399955 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:36.401347 | orchestrator | 2025-06-02 14:12:36.403072 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 14:12:36.403110 | orchestrator | 2025-06-02 14:12:36.403438 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 14:12:36.404017 | orchestrator | Monday 02 June 2025 14:12:36 +0000 (0:00:01.079) 0:00:02.760 *********** 2025-06-02 14:12:41.117947 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:12:41.119265 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:12:41.119943 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:12:41.121028 | orchestrator | ok: [testbed-manager] 2025-06-02 14:12:41.122413 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:12:41.123597 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:12:41.124416 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:12:41.124598 | orchestrator | 2025-06-02 14:12:41.125643 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 14:12:41.126424 | orchestrator | 2025-06-02 14:12:41.127149 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 14:12:41.128794 | orchestrator | Monday 02 June 2025 14:12:41 +0000 (0:00:04.718) 0:00:07.478 *********** 2025-06-02 14:12:41.276132 | orchestrator | skipping: [testbed-manager] 
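The osism.commons.facts role whose output appears above follows Ansible's local-facts convention: a facts directory is created on every host, any configured fact files are copied into it, and their content then surfaces under ansible_local on the next fact gathering (the copy step is conditional and, as the skipping results in this run show, no extra fact files were configured here). A minimal sketch of that pattern, where /etc/ansible/facts.d is Ansible's default local-facts path but the fact file name is hypothetical:

    # Sketch only; file name is a placeholder, not from the OSISM role.
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact files
      ansible.builtin.copy:
        src: testbed.fact                    # hypothetical static JSON/INI or executable fact file
        dest: /etc/ansible/facts.d/testbed.fact
        mode: "0755"

    - name: Re-read local facts
      ansible.builtin.setup:
        filter: ansible_local

After the setup task re-reads facts, a file named testbed.fact would be reachable as ansible_local.testbed in templates and conditionals.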
2025-06-02 14:12:41.368358 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:12:41.440946 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:12:41.525259 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:12:41.605159 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:12:41.656307 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:12:41.656724 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:12:41.657774 | orchestrator | 2025-06-02 14:12:41.658155 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:12:41.658944 | orchestrator | 2025-06-02 14:12:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 14:12:41.658974 | orchestrator | 2025-06-02 14:12:41 | INFO  | Please wait and do not abort execution. 2025-06-02 14:12:41.659606 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.660549 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.661265 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.662006 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.662686 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.663237 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.663638 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:12:41.664139 | orchestrator | 2025-06-02 14:12:41.664618 | orchestrator | 2025-06-02 14:12:41.665042 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:12:41.665542 | orchestrator | Monday 02 June 2025 14:12:41 +0000 (0:00:00.538) 0:00:08.017 *********** 2025-06-02 14:12:41.665797 | orchestrator | =============================================================================== 2025-06-02 14:12:41.666218 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-06-02 14:12:41.666567 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.45s 2025-06-02 14:12:41.667137 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-06-02 14:12:41.667336 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-02 14:12:42.245384 | orchestrator | 2025-06-02 14:12:42.248505 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 14:12:42 UTC 2025 2025-06-02 14:12:42.248578 | orchestrator | 2025-06-02 14:12:43.948066 | orchestrator | 2025-06-02 14:12:43 | INFO  | Collection nutshell is prepared for execution 2025-06-02 14:12:43.948170 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [0] - dotfiles 2025-06-02 14:12:43.953310 | orchestrator | Registering Redlock._acquired_script 2025-06-02 14:12:43.953354 | orchestrator | Registering Redlock._extend_script 2025-06-02 14:12:43.953366 | orchestrator | Registering Redlock._release_script 2025-06-02 14:12:43.958357 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [0] - homer 2025-06-02 14:12:43.958411 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [0] - 
netdata 2025-06-02 14:12:43.958423 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [0] - openstackclient 2025-06-02 14:12:43.958435 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [0] - phpmyadmin 2025-06-02 14:12:43.958446 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [0] - common 2025-06-02 14:12:43.960420 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [1] -- loadbalancer 2025-06-02 14:12:43.960501 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [2] --- opensearch 2025-06-02 14:12:43.960514 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [2] --- mariadb-ng 2025-06-02 14:12:43.960622 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [3] ---- horizon 2025-06-02 14:12:43.960637 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [3] ---- keystone 2025-06-02 14:12:43.960646 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [4] ----- neutron 2025-06-02 14:12:43.960656 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ wait-for-nova 2025-06-02 14:12:43.960665 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [5] ------ octavia 2025-06-02 14:12:43.961206 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- barbican 2025-06-02 14:12:43.961370 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- designate 2025-06-02 14:12:43.961389 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- ironic 2025-06-02 14:12:43.962206 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- placement 2025-06-02 14:12:43.962235 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- magnum 2025-06-02 14:12:43.962247 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [1] -- openvswitch 2025-06-02 14:12:43.962368 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [2] --- ovn 2025-06-02 14:12:43.962386 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [1] -- memcached 2025-06-02 14:12:43.962443 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [1] -- redis 2025-06-02 14:12:43.962457 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [1] -- rabbitmq-ng 2025-06-02 14:12:43.962798 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [0] - kubernetes 2025-06-02 14:12:43.964225 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [1] -- kubeconfig 2025-06-02 14:12:43.964261 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [1] -- copy-kubeconfig 2025-06-02 14:12:43.964337 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [0] - ceph 2025-06-02 14:12:43.966185 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [1] -- ceph-pools 2025-06-02 14:12:43.966241 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [2] --- copy-ceph-keys 2025-06-02 14:12:43.966262 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [3] ---- cephclient 2025-06-02 14:12:43.966280 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-02 14:12:43.966301 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [4] ----- wait-for-keystone 2025-06-02 14:12:43.966443 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-02 14:12:43.966471 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ glance 2025-06-02 14:12:43.966546 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ cinder 2025-06-02 14:12:43.966569 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ nova 2025-06-02 14:12:43.966695 | orchestrator | 2025-06-02 14:12:43 | INFO  | A [4] ----- prometheus 2025-06-02 14:12:43.966714 | orchestrator | 2025-06-02 14:12:43 | INFO  | D [5] ------ grafana 2025-06-02 14:12:44.152542 | orchestrator | 2025-06-02 14:12:44 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-06-02 14:12:44.152638 | orchestrator | 2025-06-02 14:12:44 | INFO  | Tasks are running in the background 2025-06-02 14:12:46.542584 | orchestrator | 2025-06-02 14:12:46 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-02 14:12:48.656578 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:12:48.656815 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:12:48.658412 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:12:48.658793 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:12:48.661259 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:12:48.661690 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:12:48.662327 | orchestrator | 2025-06-02 14:12:48 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:12:48.662397 | orchestrator | 2025-06-02 14:12:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:12:51.721593 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:12:51.723229 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:12:51.725324 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:12:51.728824 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:12:51.732658 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:12:51.734548 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:12:51.739282 | orchestrator | 2025-06-02 14:12:51 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:12:51.739320 | orchestrator | 2025-06-02 14:12:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:12:54.768259 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:12:54.768416 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:12:54.768861 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:12:54.772869 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:12:54.773333 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:12:54.773812 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:12:54.774489 | orchestrator | 2025-06-02 14:12:54 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:12:54.774515 | orchestrator | 2025-06-02 14:12:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:12:57.812137 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task 
eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:12:57.812227 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:12:57.815128 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:12:57.815200 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:12:57.815213 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:12:57.815225 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:12:57.817904 | orchestrator | 2025-06-02 14:12:57 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:12:57.817947 | orchestrator | 2025-06-02 14:12:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:13:00.859221 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:13:00.859618 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:13:00.864082 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:13:00.864121 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:13:00.864133 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:13:00.869869 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:13:00.869909 | orchestrator | 2025-06-02 14:13:00 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:13:00.869921 | orchestrator | 2025-06-02 14:13:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:13:03.952217 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:13:03.952314 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:13:03.952376 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:13:03.952389 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:13:03.952401 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:13:03.954119 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:13:03.957755 | orchestrator | 2025-06-02 14:13:03 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:13:03.957781 | orchestrator | 2025-06-02 14:13:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:13:07.012823 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:13:07.012962 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:13:07.013504 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:13:07.015272 | 
orchestrator | 2025-06-02 14:13:07 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:13:07.018865 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:13:07.018932 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:13:07.019606 | orchestrator | 2025-06-02 14:13:07 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:13:07.019641 | orchestrator | 2025-06-02 14:13:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:13:10.076253 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:13:10.076388 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state STARTED 2025-06-02 14:13:10.076405 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED 2025-06-02 14:13:10.076422 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED 2025-06-02 14:13:10.076517 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED 2025-06-02 14:13:10.076770 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED 2025-06-02 14:13:10.081018 | orchestrator | 2025-06-02 14:13:10 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED 2025-06-02 14:13:10.081051 | orchestrator | 2025-06-02 14:13:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:13:13.137607 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:13:13.138762 | orchestrator | 2025-06-02 14:13:13.138799 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-02 14:13:13.138812 | orchestrator | 2025-06-02 14:13:13.138824 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-02 14:13:13.138913 | orchestrator | Monday 02 June 2025 14:12:55 +0000 (0:00:00.741) 0:00:00.741 *********** 2025-06-02 14:13:13.138932 | orchestrator | changed: [testbed-manager] 2025-06-02 14:13:13.138944 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:13:13.138955 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:13:13.138966 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:13:13.138977 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:13:13.138988 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:13:13.138998 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:13:13.139009 | orchestrator | 2025-06-02 14:13:13.139020 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-06-02 14:13:13.139032 | orchestrator | Monday 02 June 2025 14:13:00 +0000 (0:00:04.815) 0:00:05.556 *********** 2025-06-02 14:13:13.139043 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 14:13:13.139055 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 14:13:13.139066 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 14:13:13.139077 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 14:13:13.139088 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 14:13:13.139099 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 14:13:13.139110 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 14:13:13.139121 | orchestrator | 2025-06-02 14:13:13.139132 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-02 14:13:13.139143 | orchestrator | Monday 02 June 2025 14:13:02 +0000 (0:00:01.433) 0:00:06.990 *********** 2025-06-02 14:13:13.139180 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.455160', 'end': '2025-06-02 14:13:01.464154', 'delta': '0:00:00.008994', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139197 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.318295', 'end': '2025-06-02 14:13:01.323426', 'delta': '0:00:00.005131', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139209 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.226504', 'end': '2025-06-02 14:13:01.230303', 'delta': '0:00:00.003799', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139246 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.523042', 'end': '2025-06-02 14:13:01.531452', 'delta': '0:00:00.008410', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139259 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.655559', 'end': '2025-06-02 14:13:01.662868', 'delta': '0:00:00.007309', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139278 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.741938', 'end': '2025-06-02 14:13:01.752032', 'delta': '0:00:00.010094', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139289 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 14:13:01.888979', 'end': '2025-06-02 14:13:01.896060', 'delta': '0:00:00.007081', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-02 14:13:13.139301 | orchestrator | 2025-06-02 14:13:13.139312 
2025-06-02 14:13:13.139312 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-02 14:13:13.139323 | orchestrator | Monday 02 June 2025 14:13:05 +0000 (0:00:02.815) 0:00:09.805 ***********
2025-06-02 14:13:13.139334 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 14:13:13.139352 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 14:13:13.139371 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 14:13:13.139389 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 14:13:13.139407 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 14:13:13.139428 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 14:13:13.139450 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 14:13:13.139471 | orchestrator |
2025-06-02 14:13:13.139486 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-06-02 14:13:13.139499 | orchestrator | Monday 02 June 2025 14:13:07 +0000 (0:00:02.471) 0:00:12.277 ***********
2025-06-02 14:13:13.139510 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-02 14:13:13.139521 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 14:13:13.139532 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 14:13:13.139543 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 14:13:13.139553 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 14:13:13.139564 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 14:13:13.139575 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 14:13:13.139585 | orchestrator |
2025-06-02 14:13:13.139596 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:13:13.139616 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139634 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139658 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139670 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139681 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139692 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139703 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:13:13.139714 | orchestrator |
2025-06-02 14:13:13.139725 | orchestrator |
2025-06-02 14:13:13.139735 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:13:13.139747 | orchestrator | Monday 02 June 2025 14:13:11 +0000 (0:00:03.751) 0:00:16.029 ***********
2025-06-02 14:13:13.139758 | orchestrator | ===============================================================================
2025-06-02 14:13:13.139768 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.82s
2025-06-02 14:13:13.139779 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.75s
2025-06-02 14:13:13.139790 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.81s
2025-06-02 14:13:13.139801 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.47s
2025-06-02 14:13:13.139812 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.43s
2025-06-02 14:13:13.139875 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task d3149740-e5da-4072-8e85-96a64ff10843 is in state SUCCESS
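[Note] From here on the job's deploy wrapper simply polls the manager: it asks for the state of each outstanding task ID and sleeps between rounds until everything reports SUCCESS, which produces the long runs of "is in state STARTED" lines below. A minimal sketch of such a poll loop (get_state is a hypothetical helper standing in for the real task-queue client):

    import time

    def wait_for_tasks(get_state, task_ids, interval=1):
        # Poll every task ID until each one reports SUCCESS, sleeping
        # `interval` second(s) between rounds -- the loop shape that emits
        # the "Task ... is in state ..." / "Wait 1 second(s)" lines below.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)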
2025-06-02 14:13:13.139891 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:13.139966 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:13.140411 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:13.140668 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:13.141156 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:13.143178 | orchestrator | 2025-06-02 14:13:13 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:13.143258 | orchestrator | 2025-06-02 14:13:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:16.198687 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:16.199756 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:16.201761 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:16.203063 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:16.204309 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:16.206510 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:16.208762 | orchestrator | 2025-06-02 14:13:16 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:16.209412 | orchestrator | 2025-06-02 14:13:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:19.259381 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:19.262318 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:19.262373 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:19.262385 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:19.263350 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:19.264987 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:19.267770 | orchestrator | 2025-06-02 14:13:19 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:19.267806 | orchestrator | 2025-06-02 14:13:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:22.350361 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:22.353306 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:22.362090 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:22.372228 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:22.372289 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:22.375363 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:22.381711 | orchestrator | 2025-06-02 14:13:22 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:22.385218 | orchestrator | 2025-06-02 14:13:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:25.447294 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:25.447682 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:25.450566 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:25.450600 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:25.450612 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:25.450765 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:25.452585 | orchestrator | 2025-06-02 14:13:25 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:25.452608 | orchestrator | 2025-06-02 14:13:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:28.490363 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:28.490458 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:28.493623 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:28.503118 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:28.503316 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:28.504130 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:28.505903 | orchestrator | 2025-06-02 14:13:28 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:28.505929 | orchestrator | 2025-06-02 14:13:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:31.566334 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:31.569173 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:31.569204 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:31.569216 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:31.569227 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:31.571299 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state STARTED
2025-06-02 14:13:31.575015 | orchestrator | 2025-06-02 14:13:31 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:31.575048 | orchestrator | 2025-06-02 14:13:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:34.615942 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:34.616480 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:34.619191 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:34.619764 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:34.620680 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:34.623724 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task 72e1448b-a297-4ee7-bb7c-9056efd89d73 is in state SUCCESS
2025-06-02 14:13:34.624343 | orchestrator | 2025-06-02 14:13:34 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:34.624364 | orchestrator | 2025-06-02 14:13:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:37.658429 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:37.663551 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:37.666601 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:37.670328 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:37.673816 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:37.673877 | orchestrator | 2025-06-02 14:13:37 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:37.673889 | orchestrator | 2025-06-02 14:13:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:40.742720 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:40.743019 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:40.748202 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:40.748239 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:40.749938 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:40.752206 | orchestrator | 2025-06-02 14:13:40 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:40.752315 | orchestrator | 2025-06-02 14:13:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:43.803056 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:43.803148 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:43.804171 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state STARTED
2025-06-02 14:13:43.804194 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:43.805516 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:43.806182 | orchestrator | 2025-06-02 14:13:43 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:43.806771 | orchestrator | 2025-06-02 14:13:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:46.858264 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:46.863710 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:46.863742 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task b60057e7-6b43-4e4a-a23c-a6cc8af09b0a is in state SUCCESS
2025-06-02 14:13:46.865597 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:46.871635 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:46.871678 | orchestrator | 2025-06-02 14:13:46 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:46.871690 | orchestrator | 2025-06-02 14:13:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:49.925385 | orchestrator | 2025-06-02 14:13:49 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:49.926654 | orchestrator | 2025-06-02 14:13:49 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:49.927645 | orchestrator | 2025-06-02 14:13:49 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:49.928978 | orchestrator | 2025-06-02 14:13:49 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:49.932581 | orchestrator | 2025-06-02 14:13:49 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:49.932656 | orchestrator | 2025-06-02 14:13:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:52.983760 | orchestrator | 2025-06-02 14:13:52 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:52.984247 | orchestrator | 2025-06-02 14:13:52 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:52.984913 | orchestrator | 2025-06-02 14:13:52 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:52.985749 | orchestrator | 2025-06-02 14:13:52 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:52.988752 | orchestrator | 2025-06-02 14:13:52 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:52.988795 | orchestrator | 2025-06-02 14:13:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:56.053079 | orchestrator | 2025-06-02 14:13:56 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:56.054069 | orchestrator | 2025-06-02 14:13:56 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:56.054122 | orchestrator | 2025-06-02 14:13:56 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:56.054162 | orchestrator | 2025-06-02 14:13:56 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:56.054343 | orchestrator | 2025-06-02 14:13:56 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:56.054444 | orchestrator | 2025-06-02 14:13:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:13:59.097644 | orchestrator | 2025-06-02 14:13:59 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:13:59.097754 | orchestrator | 2025-06-02 14:13:59 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:13:59.098246 | orchestrator | 2025-06-02 14:13:59 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state STARTED
2025-06-02 14:13:59.100178 | orchestrator | 2025-06-02 14:13:59 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:13:59.101019 | orchestrator | 2025-06-02 14:13:59 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:13:59.101055 | orchestrator | 2025-06-02 14:13:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:02.139608 | orchestrator | 2025-06-02 14:14:02 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:02.139719 | orchestrator | 2025-06-02 14:14:02 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:02.139796 | orchestrator | 2025-06-02 14:14:02 | INFO  | Task ab3c58ba-8b23-4f3c-ad30-44e24ca9b6c5 is in state SUCCESS
2025-06-02 14:14:02.141085 | orchestrator |
2025-06-02 14:14:02.141176 | orchestrator |
2025-06-02 14:14:02.141193 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-02 14:14:02.141204 | orchestrator |
2025-06-02 14:14:02.141214 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-02 14:14:02.141224 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:00.566) 0:00:00.566 ***********
2025-06-02 14:14:02.141233 | orchestrator | ok: [testbed-manager] => {
2025-06-02 14:14:02.141244 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-02 14:14:02.141254 | orchestrator | }
2025-06-02 14:14:02.141264 | orchestrator |
2025-06-02 14:14:02.141272 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-02 14:14:02.141281 | orchestrator | Monday 02 June 2025 14:12:58 +0000 (0:00:00.486) 0:00:01.053 ***********
2025-06-02 14:14:02.141290 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.141300 | orchestrator |
2025-06-02 14:14:02.141309 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-02 14:14:02.141317 | orchestrator | Monday 02 June 2025 14:12:59 +0000 (0:00:01.613) 0:00:02.666 ***********
2025-06-02 14:14:02.141344 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-02 14:14:02.141354 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-02 14:14:02.141363 | orchestrator |
2025-06-02 14:14:02.141371 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-02 14:14:02.141380 | orchestrator | Monday 02 June 2025 14:13:01 +0000 (0:00:01.115) 0:00:03.781 ***********
2025-06-02 14:14:02.141389 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.141397 | orchestrator |
2025-06-02 14:14:02.141406 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-02 14:14:02.141421 | orchestrator | Monday 02 June 2025 14:13:02 +0000 (0:00:01.849) 0:00:05.631 ***********
2025-06-02 14:14:02.141437 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.141451 | orchestrator |
2025-06-02 14:14:02.141465 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-02 14:14:02.141480 | orchestrator | Monday 02 June 2025 14:13:04 +0000 (0:00:01.820) 0:00:07.452 ***********
2025-06-02 14:14:02.141495 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-02 14:14:02.141511 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.141524 | orchestrator |
2025-06-02 14:14:02.141538 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-02 14:14:02.141551 | orchestrator | Monday 02 June 2025 14:13:28 +0000 (0:00:24.116) 0:00:31.569 ***********
2025-06-02 14:14:02.141566 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.141580 | orchestrator |
2025-06-02 14:14:02.141595 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:14:02.141610 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.141627 | orchestrator |
2025-06-02 14:14:02.141642 | orchestrator |
2025-06-02 14:14:02.141659 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:14:02.141675 | orchestrator | Monday 02 June 2025 14:13:31 +0000 (0:00:02.182) 0:00:33.752 ***********
2025-06-02 14:14:02.141691 | orchestrator | ===============================================================================
2025-06-02 14:14:02.141709 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.12s
2025-06-02 14:14:02.141726 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.18s
2025-06-02 14:14:02.141742 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.85s
2025-06-02 14:14:02.141753 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.82s
2025-06-02 14:14:02.141763 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.61s
2025-06-02 14:14:02.141774 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.12s
2025-06-02 14:14:02.141785 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.49s
2025-06-02 14:14:02.141795 | orchestrator |
2025-06-02 14:14:02.141805 | orchestrator |
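[Note] The FAILED - RETRYING line in the homer play above (and the matching lines in the openstackclient and phpmyadmin plays below) is Ansible's retries/until mechanism, not a hard failure: the task re-runs until its success condition holds or the retry budget (10 here) is exhausted, which is why "Manage homer service" can consume 24 s and still finish ok. The equivalent control flow, sketched in Python (retry count and delay are assumptions):

    import time

    def retry_until(check, name, retries=10, delay=5):
        # Re-run `check` until it succeeds or the retry budget is spent,
        # mirroring an Ansible task with `until:`, `retries:` and `delay:`.
        for attempt in range(retries + 1):
            if check():
                return
            if attempt < retries:
                print(f"FAILED - RETRYING: {name} ({retries - attempt} retries left).")
                time.sleep(delay)
        raise RuntimeError(f"{name} failed after {retries} retries")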
2025-06-02 14:14:02.141815 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-02 14:14:02.141825 | orchestrator |
2025-06-02 14:14:02.141860 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-02 14:14:02.141870 | orchestrator | Monday 02 June 2025 14:12:56 +0000 (0:00:00.820) 0:00:00.820 ***********
2025-06-02 14:14:02.141880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-02 14:14:02.141892 | orchestrator |
2025-06-02 14:14:02.141902 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-02 14:14:02.141912 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:00.864) 0:00:01.685 ***********
2025-06-02 14:14:02.141922 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-02 14:14:02.141932 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-02 14:14:02.141952 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-02 14:14:02.141962 | orchestrator |
2025-06-02 14:14:02.141972 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-02 14:14:02.141983 | orchestrator | Monday 02 June 2025 14:12:59 +0000 (0:00:01.848) 0:00:03.533 ***********
2025-06-02 14:14:02.141991 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142000 | orchestrator |
2025-06-02 14:14:02.142009 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-02 14:14:02.142087 | orchestrator | Monday 02 June 2025 14:13:01 +0000 (0:00:01.849) 0:00:05.383 ***********
2025-06-02 14:14:02.142122 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-02 14:14:02.142133 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.142141 | orchestrator |
2025-06-02 14:14:02.142150 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-02 14:14:02.142199 | orchestrator | Monday 02 June 2025 14:13:37 +0000 (0:00:36.362) 0:00:41.745 ***********
2025-06-02 14:14:02.142209 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142218 | orchestrator |
2025-06-02 14:14:02.142227 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-02 14:14:02.142236 | orchestrator | Monday 02 June 2025 14:13:38 +0000 (0:00:01.006) 0:00:42.752 ***********
2025-06-02 14:14:02.142245 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.142253 | orchestrator |
2025-06-02 14:14:02.142262 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-02 14:14:02.142271 | orchestrator | Monday 02 June 2025 14:13:40 +0000 (0:00:01.458) 0:00:44.211 ***********
2025-06-02 14:14:02.142280 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142288 | orchestrator |
2025-06-02 14:14:02.142297 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-02 14:14:02.142306 | orchestrator | Monday 02 June 2025 14:13:42 +0000 (0:00:02.823) 0:00:47.034 ***********
2025-06-02 14:14:02.142315 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142323 | orchestrator |
2025-06-02 14:14:02.142332 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-02 14:14:02.142341 | orchestrator | Monday 02 June 2025 14:13:44 +0000 (0:00:01.500) 0:00:48.534 ***********
2025-06-02 14:14:02.142350 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142358 | orchestrator |
2025-06-02 14:14:02.142367 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-02 14:14:02.142376 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.701) 0:00:49.236 ***********
2025-06-02 14:14:02.142384 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.142393 | orchestrator |
2025-06-02 14:14:02.142402 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:14:02.142410 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.142419 | orchestrator |
2025-06-02 14:14:02.142428 | orchestrator |
2025-06-02 14:14:02.142436 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:14:02.142445 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.353) 0:00:49.589 ***********
2025-06-02 14:14:02.142454 | orchestrator | ===============================================================================
2025-06-02 14:14:02.142462 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.36s
2025-06-02 14:14:02.142471 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.82s
2025-06-02 14:14:02.142480 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.85s
2025-06-02 14:14:02.142489 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.85s
2025-06-02 14:14:02.142497 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.50s
2025-06-02 14:14:02.142514 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.46s
2025-06-02 14:14:02.142522 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.01s
2025-06-02 14:14:02.142531 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.86s
2025-06-02 14:14:02.142540 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.70s
2025-06-02 14:14:02.142548 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s
2025-06-02 14:14:02.142557 | orchestrator |
2025-06-02 14:14:02.142566 | orchestrator |
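[Note] The handler chain above (restart the service, ensure all containers are up, wait for a healthy service) is the recurring pattern for these docker-compose based manager services: after a restart the play appears to block until the container's Docker healthcheck reports healthy. A sketch of that final wait step (container name and timeout are assumptions, not taken from the role):

    import subprocess
    import time

    def wait_for_healthy(container, timeout=120):
        # Poll the container's Docker healthcheck until it reports "healthy".
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = subprocess.run(
                ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
                capture_output=True, text=True,
            ).stdout.strip()
            if status == "healthy":
                return
            time.sleep(2)
        raise TimeoutError(f"{container} not healthy after {timeout}s")

    wait_for_healthy("openstackclient")  # hypothetical container name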
2025-06-02 14:14:02.142574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:14:02.142583 | orchestrator |
2025-06-02 14:14:02.142591 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:14:02.142600 | orchestrator | Monday 02 June 2025 14:12:55 +0000 (0:00:00.313) 0:00:00.313 ***********
2025-06-02 14:14:02.142608 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-02 14:14:02.142617 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-02 14:14:02.142626 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-02 14:14:02.142634 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-02 14:14:02.142643 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-02 14:14:02.142652 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-02 14:14:02.142660 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-02 14:14:02.142669 | orchestrator |
2025-06-02 14:14:02.142678 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-02 14:14:02.142686 | orchestrator |
2025-06-02 14:14:02.142695 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-02 14:14:02.142704 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:02.359) 0:00:02.672 ***********
2025-06-02 14:14:02.142724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:14:02.142735 | orchestrator |
2025-06-02 14:14:02.142744 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-02 14:14:02.142753 | orchestrator | Monday 02 June 2025 14:12:59 +0000 (0:00:02.451) 0:00:05.123 ***********
2025-06-02 14:14:02.142761 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:14:02.142770 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.142779 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:14:02.142788 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:14:02.142797 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:14:02.142817 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:14:02.142827 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:14:02.142851 | orchestrator |
2025-06-02 14:14:02.142860 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-02 14:14:02.142869 | orchestrator | Monday 02 June 2025 14:13:01 +0000 (0:00:04.079) 0:00:06.996 ***********
2025-06-02 14:14:02.142878 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:14:02.142886 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:14:02.142895 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.142904 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:14:02.142912 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:14:02.142921 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:14:02.142930 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:14:02.142939 | orchestrator |
2025-06-02 14:14:02.142948 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-02 14:14:02.142956 | orchestrator | Monday 02 June 2025 14:13:05 +0000 (0:00:03.122) 0:00:11.075 ***********
2025-06-02 14:14:02.142965 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.142974 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:14:02.142983 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:14:02.143003 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:14:02.143012 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:14:02.143020 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:14:02.143029 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:14:02.143038 | orchestrator |
2025-06-02 14:14:02.143047 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-02 14:14:02.143055 | orchestrator | Monday 02 June 2025 14:13:09 +0000 (0:00:09.931) 0:00:14.198 ***********
2025-06-02 14:14:02.143064 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.143073 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:14:02.143081 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:14:02.143090 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:14:02.143099 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:14:02.143107 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:14:02.143116 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:14:02.143124 | orchestrator |
2025-06-02 14:14:02.143133 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-02 14:14:02.143142 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:09.931) 0:00:24.129 ***********
2025-06-02 14:14:02.143151 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:14:02.143159 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.143168 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:14:02.143177 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:14:02.143185 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:14:02.143194 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:14:02.143203 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:14:02.143211 | orchestrator |
2025-06-02 14:14:02.143220 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-02 14:14:02.143229 | orchestrator | Monday 02 June 2025 14:13:36 +0000 (0:00:17.992) 0:00:42.122 ***********
2025-06-02 14:14:02.143238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:14:02.143249 | orchestrator |
2025-06-02 14:14:02.143258 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-02 14:14:02.143267 | orchestrator | Monday 02 June 2025 14:13:38 +0000 (0:00:01.807) 0:00:43.930 ***********
2025-06-02 14:14:02.143275 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-02 14:14:02.143284 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-02 14:14:02.143293 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-02 14:14:02.143303 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-02 14:14:02.143311 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-02 14:14:02.143320 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-02 14:14:02.143329 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-02 14:14:02.143337 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-02 14:14:02.143346 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-02 14:14:02.143355 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-02 14:14:02.143363 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-02 14:14:02.143372 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-02 14:14:02.143381 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-02 14:14:02.143390 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-02 14:14:02.143399 | orchestrator |
2025-06-02 14:14:02.143408 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-02 14:14:02.143417 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:07.112) 0:00:51.042 ***********
2025-06-02 14:14:02.143426 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.143435 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:14:02.143448 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:14:02.143471 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:14:02.143480 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:14:02.143489 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:14:02.143497 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:14:02.143506 | orchestrator |
2025-06-02 14:14:02.143515 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-02 14:14:02.143524 | orchestrator | Monday 02 June 2025 14:13:47 +0000 (0:00:01.510) 0:00:52.552 ***********
2025-06-02 14:14:02.143533 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.143542 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:14:02.143551 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:14:02.143559 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:14:02.143568 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:14:02.143577 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:14:02.143585 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:14:02.143594 | orchestrator |
2025-06-02 14:14:02.143603 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-02 14:14:02.143618 | orchestrator | Monday 02 June 2025 14:13:49 +0000 (0:00:01.775) 0:00:54.328 ***********
2025-06-02 14:14:02.143628 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.143636 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:14:02.143647 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:14:02.143662 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:14:02.143676 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:14:02.143691 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:14:02.143705 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:14:02.143719 | orchestrator |
2025-06-02 14:14:02.143733 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-02 14:14:02.143746 | orchestrator | Monday 02 June 2025 14:13:50 +0000 (0:00:01.197) 0:00:55.526 ***********
2025-06-02 14:14:02.143759 | orchestrator | ok: [testbed-manager]
2025-06-02 14:14:02.143774 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:14:02.143788 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:14:02.143802 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:14:02.143816 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:14:02.143849 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:14:02.143865 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:14:02.143879 | orchestrator |
2025-06-02 14:14:02.143888 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-02 14:14:02.143897 | orchestrator | Monday 02 June 2025 14:13:52 +0000 (0:00:02.142) 0:00:57.668 ***********
2025-06-02 14:14:02.143906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-02 14:14:02.143917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:14:02.143926 | orchestrator |
2025-06-02 14:14:02.143966 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-02 14:14:02.143976 | orchestrator | Monday 02 June 2025 14:13:54 +0000 (0:00:01.556) 0:00:59.225 ***********
2025-06-02 14:14:02.143985 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.143994 | orchestrator |
2025-06-02 14:14:02.144002 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-02 14:14:02.144011 | orchestrator | Monday 02 June 2025 14:13:56 +0000 (0:00:02.264) 0:01:01.490 ***********
2025-06-02 14:14:02.144020 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:14:02.144028 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:14:02.144037 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:14:02.144046 | orchestrator | changed: [testbed-manager]
2025-06-02 14:14:02.144055 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:14:02.144063 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:14:02.144074 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:14:02.144089 | orchestrator |
2025-06-02 14:14:02.144113 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:14:02.144130 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144142 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144151 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144160 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144169 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144178 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144187 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:14:02.144195 | orchestrator |
2025-06-02 14:14:02.144204 | orchestrator |
2025-06-02 14:14:02.144213 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:14:02.144224 | orchestrator | Monday 02 June 2025 14:14:00 +0000 (0:00:04.180) 0:01:05.671 ***********
2025-06-02 14:14:02.144235 | orchestrator | ===============================================================================
2025-06-02 14:14:02.144246 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.99s
2025-06-02 14:14:02.144257 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.93s
2025-06-02 14:14:02.144268 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.11s
2025-06-02 14:14:02.144278 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.18s
2025-06-02 14:14:02.144289 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.08s
2025-06-02 14:14:02.144300 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.12s
2025-06-02 14:14:02.144311 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.45s
2025-06-02 14:14:02.144322 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.36s
2025-06-02 14:14:02.144332 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.26s
2025-06-02 14:14:02.144344 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.14s
2025-06-02 14:14:02.144354 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.87s
2025-06-02 14:14:02.144380 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.81s
2025-06-02 14:14:02.144391 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.78s
2025-06-02 14:14:02.144402 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.56s
2025-06-02 14:14:02.144413 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.51s
2025-06-02 14:14:02.144424 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.20s
2025-06-02 14:14:02.144435 | orchestrator | 2025-06-02 14:14:02 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:02.144446 | orchestrator | 2025-06-02 14:14:02 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:02.144457 | orchestrator | 2025-06-02 14:14:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:05.184979 | orchestrator | 2025-06-02 14:14:05 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:05.189547 | orchestrator | 2025-06-02 14:14:05 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:05.191443 | orchestrator | 2025-06-02 14:14:05 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:05.194313 | orchestrator | 2025-06-02 14:14:05 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:05.194518 | orchestrator | 2025-06-02 14:14:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:08.249699 | orchestrator | 2025-06-02 14:14:08 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:08.251252 | orchestrator | 2025-06-02 14:14:08 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:08.252897 | orchestrator | 2025-06-02 14:14:08 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:08.254241 | orchestrator | 2025-06-02 14:14:08 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:08.254704 | orchestrator | 2025-06-02 14:14:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:11.293393 | orchestrator | 2025-06-02 14:14:11 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:11.295159 | orchestrator | 2025-06-02 14:14:11 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:11.296372 | orchestrator | 2025-06-02 14:14:11 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:11.298267 | orchestrator | 2025-06-02 14:14:11 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:11.298303 | orchestrator | 2025-06-02 14:14:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:14.341113 | orchestrator | 2025-06-02 14:14:14 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:14.342647 | orchestrator | 2025-06-02 14:14:14 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:14.344399 | orchestrator | 2025-06-02 14:14:14 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:14.346741 | orchestrator | 2025-06-02 14:14:14 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:14.346769 | orchestrator | 2025-06-02 14:14:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:17.393440 | orchestrator | 2025-06-02 14:14:17 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:17.393549 | orchestrator | 2025-06-02 14:14:17 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:17.394646 | orchestrator | 2025-06-02 14:14:17 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:17.394675 | orchestrator | 2025-06-02 14:14:17 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:17.394687 | orchestrator | 2025-06-02 14:14:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:20.439343 | orchestrator | 2025-06-02 14:14:20 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:20.441461 | orchestrator | 2025-06-02 14:14:20 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:20.443176 | orchestrator | 2025-06-02 14:14:20 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:20.444986 | orchestrator | 2025-06-02 14:14:20 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:20.445052 | orchestrator | 2025-06-02 14:14:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:23.496977 | orchestrator | 2025-06-02 14:14:23 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:23.497157 | orchestrator | 2025-06-02 14:14:23 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:23.498577 | orchestrator | 2025-06-02 14:14:23 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:23.499105 | orchestrator | 2025-06-02 14:14:23 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:23.499139 | orchestrator | 2025-06-02 14:14:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:26.553707 | orchestrator | 2025-06-02 14:14:26 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:26.554105 | orchestrator | 2025-06-02 14:14:26 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:26.555720 | orchestrator | 2025-06-02 14:14:26 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:26.557004 | orchestrator | 2025-06-02 14:14:26 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:26.557034 | orchestrator | 2025-06-02 14:14:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:29.603336 | orchestrator | 2025-06-02 14:14:29 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:29.603606 | orchestrator | 2025-06-02 14:14:29 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:29.606007 | orchestrator | 2025-06-02 14:14:29 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:29.609804 | orchestrator | 2025-06-02 14:14:29 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:29.609921 | orchestrator | 2025-06-02 14:14:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:32.651803 | orchestrator | 2025-06-02 14:14:32 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:32.652002 | orchestrator | 2025-06-02 14:14:32 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:32.652105 | orchestrator | 2025-06-02 14:14:32 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:32.653280 | orchestrator | 2025-06-02 14:14:32 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:32.653317 | orchestrator | 2025-06-02 14:14:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:35.692509 | orchestrator | 2025-06-02 14:14:35 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:35.693943 | orchestrator | 2025-06-02 14:14:35 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:35.697167 | orchestrator | 2025-06-02 14:14:35 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state STARTED
2025-06-02 14:14:35.697241 | orchestrator | 2025-06-02 14:14:35 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:35.698774 | orchestrator | 2025-06-02 14:14:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:38.745768 | orchestrator | 2025-06-02 14:14:38 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:38.747387 | orchestrator | 2025-06-02 14:14:38 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:38.748224 | orchestrator | 2025-06-02 14:14:38 | INFO  | Task 76970590-bf84-4720-8d12-905d80ec09b6 is in state SUCCESS
2025-06-02 14:14:38.750731 | orchestrator | 2025-06-02 14:14:38 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:38.751104 | orchestrator | 2025-06-02 14:14:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:41.810537 | orchestrator | 2025-06-02 14:14:41 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:41.810888 | orchestrator | 2025-06-02 14:14:41 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:41.811667 | orchestrator | 2025-06-02 14:14:41 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:41.811695 | orchestrator | 2025-06-02 14:14:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:44.868652 | orchestrator | 2025-06-02 14:14:44 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:44.874687 | orchestrator | 2025-06-02 14:14:44 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:44.878550 | orchestrator | 2025-06-02 14:14:44 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:44.879389 | orchestrator | 2025-06-02 14:14:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:47.932647 | orchestrator | 2025-06-02 14:14:47 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:47.934256 | orchestrator | 2025-06-02 14:14:47 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:47.935316 | orchestrator | 2025-06-02 14:14:47 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:47.935478 | orchestrator | 2025-06-02 14:14:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:50.976034 | orchestrator | 2025-06-02 14:14:50 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:50.979099 | orchestrator | 2025-06-02 14:14:50 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:50.979779 | orchestrator | 2025-06-02 14:14:50 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:50.979936 | orchestrator | 2025-06-02 14:14:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:54.021291 | orchestrator | 2025-06-02 14:14:54 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:54.023248 | orchestrator | 2025-06-02 14:14:54 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:54.025072 | orchestrator | 2025-06-02 14:14:54 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:54.025478 | orchestrator | 2025-06-02 14:14:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:14:57.065933 | orchestrator | 2025-06-02 14:14:57 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:14:57.069458 | orchestrator | 2025-06-02 14:14:57 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:14:57.072197 | orchestrator | 2025-06-02 14:14:57 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:14:57.072237 | orchestrator | 2025-06-02 14:14:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:00.105587 | orchestrator | 2025-06-02 14:15:00 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:00.106347 | orchestrator | 2025-06-02 14:15:00 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:00.107450 | orchestrator | 2025-06-02 14:15:00 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:15:00.107501 | orchestrator | 2025-06-02 14:15:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:03.160210 | orchestrator | 2025-06-02 14:15:03 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:03.162752 | orchestrator | 2025-06-02 14:15:03 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:03.164550 | orchestrator | 2025-06-02 14:15:03 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:15:03.164951 | orchestrator | 2025-06-02 14:15:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:06.212784 | orchestrator | 2025-06-02 14:15:06 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:06.213080 | orchestrator | 2025-06-02 14:15:06 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:06.214159 | orchestrator | 2025-06-02 14:15:06 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:15:06.214193 | orchestrator | 2025-06-02 14:15:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:09.257428 | orchestrator | 2025-06-02 14:15:09 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:09.259481 | orchestrator | 2025-06-02 14:15:09 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:09.262166 | orchestrator | 2025-06-02 14:15:09 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:15:09.262884 | orchestrator | 2025-06-02 14:15:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:12.308728 | orchestrator | 2025-06-02 14:15:12 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:12.313235 | orchestrator | 2025-06-02 14:15:12 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:12.313293 | orchestrator | 2025-06-02 14:15:12 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state STARTED
2025-06-02 14:15:12.313306 | orchestrator | 2025-06-02 14:15:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:15.362578 | orchestrator | 2025-06-02 14:15:15 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:15.362705 | orchestrator | 2025-06-02 14:15:15 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:24.523979 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:24.524420 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:24.525131 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:24.526001 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:24.526786 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:24.527667 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:24.531214 | orchestrator | 2025-06-02 14:15:24 | INFO  | Task 04e6a61b-a858-401a-a41e-1c23acb06ec1 is in state SUCCESS
2025-06-02 14:15:24.533787 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-02 14:15:24.533820 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-02 14:15:24.533887 | orchestrator | Monday 02 June 2025 14:13:17 +0000 (0:00:00.258)       0:00:00.258 ***********
2025-06-02 14:15:24.533910 | orchestrator | ok: [testbed-manager]
2025-06-02 14:15:24.533942 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-02 14:15:24.533954 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:00.854)       0:00:01.112 ***********
2025-06-02 14:15:24.533965 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-02 14:15:24.533988 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-02 14:15:24.533999 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:00.558)       0:00:01.671 ***********
2025-06-02 14:15:24.534010 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.534120 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-02 14:15:24.534141 | orchestrator | Monday 02 June 2025 14:13:20 +0000 (0:00:02.146)       0:00:03.818 ***********
2025-06-02 14:15:24.534162 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-02 14:15:24.534183 | orchestrator | ok: [testbed-manager]
2025-06-02 14:15:24.534207 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-02 14:15:24.534218 | orchestrator | Monday 02 June 2025 14:14:26 +0000 (0:01:05.777)       0:01:09.595 ***********
2025-06-02 14:15:24.534229 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.534251 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:15:24.534263 | orchestrator | testbed-manager            : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
2025-06-02 14:15:24.534302 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:15:24.534323 | orchestrator | Monday 02 June 2025 14:14:35 +0000 (0:00:08.899)       0:01:18.495 ***********
2025-06-02 14:15:24.534336 | orchestrator | ===============================================================================
2025-06-02 14:15:24.534349 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 65.78s
2025-06-02 14:15:24.534362 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.90s
2025-06-02 14:15:24.534374 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.15s
2025-06-02 14:15:24.534408 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.85s
2025-06-02 14:15:24.534421 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s
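The lone FAILED - RETRYING line above is Ansible's retries/until mechanism at work: the first attempt to bring the phpmyadmin service up failed (presumably while images were still being pulled and containers started), a later attempt passed, and the task therefore ends in ok after consuming 65.78 s. A minimal Python sketch of the same semantics; the check callable and the 5-second delay are assumptions, not the role's actual values:

import time

def retry_until(check, retries: int = 10, delay: float = 5.0):
    """Mimic Ansible's retries/delay/until: re-run `check` until it
    returns a truthy result or the retries are exhausted."""
    for attempt in range(retries + 1):  # first try plus `retries` retries
        result = check()
        if result:
            return result
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError("all retries exhausted")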
2025-06-02 14:15:24.534459 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-02 14:15:24.534484 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 14:15:24.534497 | orchestrator | Monday 02 June 2025 14:12:48 +0000 (0:00:00.243)       0:00:00.243 ***********
2025-06-02 14:15:24.534510 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:15:24.534536 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-02 14:15:24.534549 | orchestrator | Monday 02 June 2025 14:12:49 +0000 (0:00:01.298)       0:00:01.541 ***********
2025-06-02 14:15:24 | orchestrator | changed: [testbed-manager] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-0] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-1] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-2] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-3] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-4] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-5] => (items: cron, fluentd, kolla-toolbox)
2025-06-02 14:15:24.534867 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 14:15:24 | orchestrator | Monday 02 June 2025 14:12:54 +0000 (0:00:04.643)       0:00:06.184 ***********
2025-06-02 14:15:24.534878 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
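Every (item=...) the copy-certs and config tasks below iterate over is one kolla service definition: container name, group, image, environment, and bind mounts. Reconstructed from the log as a Python literal, the fluentd entry reads as follows; kolla-toolbox follows the same schema but runs privileged with /dev and /run mounted and sets ANSIBLE_NOCOLOR, ANSIBLE_LIBRARY and REQUESTS_CA_BUNDLE, while cron swaps the environment for KOLLA_LOGROTATE_SCHEDULE=daily:

fluentd = {
    "container_name": "fluentd",
    "group": "fluentd",
    "enabled": True,
    "image": "registry.osism.tech/kolla/fluentd:2024.2",
    # COPY_ALWAYS: the entrypoint re-copies config files on every start.
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": [
        "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "fluentd_data:/var/lib/fluentd/data/",
        "/var/log/journal:/var/log/journal:ro",
    ],
    "dimensions": {},
}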
2025-06-02 14:15:24.534910 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-02 14:15:24.534921 | orchestrator | Monday 02 June 2025 14:12:55 +0000 (0:00:01.458)       0:00:07.642 ***********
2025-06-02 14:15:24 | orchestrator | changed: [testbed-manager] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-0] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-1] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-2] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-3] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-4] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-5] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24.535303 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-02 14:15:24.535314 | orchestrator | Monday 02 June 2025 14:13:01 +0000 (0:00:05.478)       0:00:13.121 ***********
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-manager] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-0] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-1] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-2] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-3] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-4] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-5] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24.535717 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-02 14:15:24.535729 | orchestrator | Monday 02 June 2025 14:13:02 +0000 (0:00:00.909)       0:00:14.030 ***********
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-manager] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-0] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-1] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-2] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-3] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-4] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-node-5] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24.536651 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-02 14:15:24.536663 | orchestrator | Monday 02 June 2025 14:13:05 +0000 (0:00:03.333)       0:00:17.364 ***********
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-manager], [testbed-node-0], [testbed-node-1], [testbed-node-2], [testbed-node-3], [testbed-node-4], [testbed-node-5]
2025-06-02 14:15:24.536767 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-02 14:15:24.536778 | orchestrator | Monday 02 June 2025 14:13:06 +0000 (0:00:01.066)       0:00:18.430 ***********
2025-06-02 14:15:24 | orchestrator | skipping: [testbed-manager], [testbed-node-0], [testbed-node-1], [testbed-node-2], [testbed-node-3], [testbed-node-4], [testbed-node-5]
2025-06-02 14:15:24.536902 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-02 14:15:24.536912 | orchestrator | Monday 02 June 2025 14:13:08 +0000 (0:00:01.312)       0:00:19.743 ***********
2025-06-02 14:15:24 | orchestrator | changed: [testbed-manager] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-0] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-1] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-2] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-3] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-4] => (items: fluentd, kolla-toolbox, cron)
2025-06-02 14:15:24 | orchestrator | changed: [testbed-node-5] => (items: fluentd, kolla-toolbox, cron)
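The config.json copied here is what the kolla entrypoint reads from the /var/lib/kolla/config_files/ mount at container start: it names the command to exec and the config files to copy into place, and KOLLA_CONFIG_STRATEGY=COPY_ALWAYS repeats that copy on every restart. A sketch of the general shape only; the command and paths below are illustrative and not the exact testbed content:

import json

# Illustrative only: the general shape of a kolla config.json,
# not the exact file shipped by this task.
fluentd_config = {
    "command": "fluentd -c /etc/fluentd/fluentd.conf",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/fluentd.conf",
            "dest": "/etc/fluentd/fluentd.conf",
            "owner": "fluentd",
            "perm": "0600",
        }
    ],
}

print(json.dumps(fluentd_config, indent=4))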
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.537238 | orchestrator | 2025-06-02 14:15:24.537250 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 14:15:24.537261 | orchestrator | Monday 02 June 2025 14:13:14 +0000 (0:00:06.400) 0:00:26.144 *********** 2025-06-02 14:15:24.537272 | orchestrator | [WARNING]: Skipped 2025-06-02 14:15:24.537288 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-02 14:15:24.537299 | orchestrator | to this access issue:
2025-06-02 14:15:24.537310 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-02 14:15:24.537321 | orchestrator | directory
2025-06-02 14:15:24.537332 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 14:15:24.537343 | orchestrator |
2025-06-02 14:15:24.537354 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-02 14:15:24.537365 | orchestrator | Monday 02 June 2025 14:13:15 +0000 (0:00:01.417) 0:00:27.561 ***********
2025-06-02 14:15:24.537376 | orchestrator | [WARNING]: Skipped
2025-06-02 14:15:24.537387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-02 14:15:24.537398 | orchestrator | to this access issue:
2025-06-02 14:15:24.537409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-06-02 14:15:24.537419 | orchestrator | directory
2025-06-02 14:15:24.537430 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 14:15:24.537441 | orchestrator |
2025-06-02 14:15:24.537452 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-02 14:15:24.537463 | orchestrator | Monday 02 June 2025 14:13:16 +0000 (0:00:01.058) 0:00:28.620 ***********
2025-06-02 14:15:24.537474 | orchestrator | [WARNING]: Skipped
2025-06-02 14:15:24.537484 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-06-02 14:15:24.537495 | orchestrator | to this access issue:
2025-06-02 14:15:24.537506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-06-02 14:15:24.537517 | orchestrator | directory
2025-06-02 14:15:24.537527 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 14:15:24.537538 | orchestrator |
2025-06-02 14:15:24.537549 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-02 14:15:24.537560 | orchestrator | Monday 02 June 2025 14:13:17 +0000 (0:00:00.942) 0:00:29.562 ***********
2025-06-02 14:15:24.537571 | orchestrator | [WARNING]: Skipped
2025-06-02 14:15:24.537581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-06-02 14:15:24.537592 | orchestrator | to this access issue:
2025-06-02 14:15:24.537603 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-06-02 14:15:24.537614 | orchestrator | directory
2025-06-02 14:15:24.537624 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 14:15:24.537635 | orchestrator |
2025-06-02 14:15:24.537646 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-02 14:15:24.537657 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:00.772) 0:00:30.335 ***********
2025-06-02 14:15:24.537668 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.537679 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:15:24.537690 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:15:24.537700 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:15:24.537711 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:15:24.537722 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:15:24.537733 | orchestrator | changed:
[testbed-node-5] 2025-06-02 14:15:24.537744 | orchestrator | 2025-06-02 14:15:24.537755 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 14:15:24.537766 | orchestrator | Monday 02 June 2025 14:13:24 +0000 (0:00:05.615) 0:00:35.950 *********** 2025-06-02 14:15:24.537777 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537806 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537822 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537888 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537899 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 14:15:24.537910 | orchestrator | 2025-06-02 14:15:24.537921 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-02 14:15:24.537932 | orchestrator | Monday 02 June 2025 14:13:27 +0000 (0:00:02.856) 0:00:38.806 *********** 2025-06-02 14:15:24.537943 | orchestrator | changed: [testbed-manager] 2025-06-02 14:15:24.537953 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:15:24.537964 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:15:24.537975 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:15:24.537986 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:15:24.537997 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:15:24.538007 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:15:24.538071 | orchestrator | 2025-06-02 14:15:24.538083 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-02 14:15:24.538094 | orchestrator | Monday 02 June 2025 14:13:30 +0000 (0:00:03.389) 0:00:42.196 *********** 2025-06-02 14:15:24.538105 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538123 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
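Each (item={'key': ..., 'value': ...}) entry above is one record of the kolla-ansible container map for the common role, dumped by Ansible as a Python dict: image, environment, bind mounts, and privilege flags for the fluentd, kolla_toolbox, and cron containers. As a rough, hypothetical sketch of what such a record amounts to (kolla-ansible starts containers through its own Ansible modules, not the docker CLI; the rendering below is illustrative only):

    # Sketch: render one of the container definitions logged above into a
    # "docker run"-style command. Field names mirror the dicts in the log;
    # the CLI translation itself is an assumption for illustration.
    import shlex

    cron = {
        "container_name": "cron",
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    }

    def render_run(definition: dict) -> str:
        parts = ["docker", "run", "--detach", "--name", definition["container_name"]]
        for name, value in definition.get("environment", {}).items():
            parts += ["--env", f"{name}={value}"]
        for volume in definition.get("volumes", []):
            parts += ["--volume", volume]
        if definition.get("privileged"):  # e.g. the kolla_toolbox definition
            parts.append("--privileged")
        parts.append(definition["image"])
        return " ".join(shlex.quote(part) for part in parts)

    print(render_run(cron))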
 2025-06-02 14:15:24.538135 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538190 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538202 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538254 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538265 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538315 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-06-02 14:15:24.538327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538343 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:15:24.538366 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538385 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538396 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538408 | orchestrator | 2025-06-02 14:15:24.538419 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 14:15:24.538430 | orchestrator | Monday 02 June 2025 14:13:33 +0000 (0:00:03.218) 0:00:45.415 *********** 2025-06-02 14:15:24.538441 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538452 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538463 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538486 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538505 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538524 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538541 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 14:15:24.538561 | orchestrator | 2025-06-02 14:15:24.538575 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 14:15:24.538585 | orchestrator | Monday 02 June 2025 14:13:35 +0000 (0:00:02.089) 0:00:47.504 *********** 2025-06-02 14:15:24.538596 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538618 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538629 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538639 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538650 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538661 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 14:15:24.538671 | orchestrator | 2025-06-02 14:15:24.538682 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 14:15:24.538693 | orchestrator | Monday 02 June 2025 14:13:37 +0000 (0:00:01.737) 0:00:49.242 *********** 2025-06-02 14:15:24.538709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538721 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538751 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538805 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 14:15:24.538925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.538998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.539009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.539021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:15:24.539032 | orchestrator | 2025-06-02 14:15:24.539049 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 14:15:24.539061 | orchestrator | Monday 02 June 2025 14:13:41 +0000 (0:00:03.849) 0:00:53.092 *********** 2025-06-02 14:15:24.539072 | orchestrator | 
changed: [testbed-manager] 2025-06-02 14:15:24.539083 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:15:24.539094 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:15:24.539105 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:15:24.539116 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:15:24.539126 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:15:24.539137 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:15:24.539148 | orchestrator | 2025-06-02 14:15:24.539159 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 14:15:24.539170 | orchestrator | Monday 02 June 2025 14:13:43 +0000 (0:00:02.243) 0:00:55.335 *********** 2025-06-02 14:15:24.539181 | orchestrator | changed: [testbed-manager] 2025-06-02 14:15:24.539192 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:15:24.539202 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:15:24.539213 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:15:24.539224 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:15:24.539234 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:15:24.539245 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:15:24.539256 | orchestrator | 2025-06-02 14:15:24.539273 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539284 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:01.895) 0:00:57.231 *********** 2025-06-02 14:15:24.539295 | orchestrator | 2025-06-02 14:15:24.539306 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539317 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.085) 0:00:57.316 *********** 2025-06-02 14:15:24.539328 | orchestrator | 2025-06-02 14:15:24.539339 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539350 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.079) 0:00:57.396 *********** 2025-06-02 14:15:24.539361 | orchestrator | 2025-06-02 14:15:24.539372 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539383 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.088) 0:00:57.485 *********** 2025-06-02 14:15:24.539394 | orchestrator | 2025-06-02 14:15:24.539405 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539420 | orchestrator | Monday 02 June 2025 14:13:45 +0000 (0:00:00.093) 0:00:57.579 *********** 2025-06-02 14:15:24.539431 | orchestrator | 2025-06-02 14:15:24.539442 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539454 | orchestrator | Monday 02 June 2025 14:13:46 +0000 (0:00:00.282) 0:00:57.861 *********** 2025-06-02 14:15:24.539464 | orchestrator | 2025-06-02 14:15:24.539475 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 14:15:24.539486 | orchestrator | Monday 02 June 2025 14:13:46 +0000 (0:00:00.081) 0:00:57.943 *********** 2025-06-02 14:15:24.539497 | orchestrator | 2025-06-02 14:15:24.539508 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 14:15:24.539519 | orchestrator | Monday 02 June 2025 14:13:46 +0000 (0:00:00.091) 0:00:58.035 *********** 2025-06-02 14:15:24.539530 | orchestrator 
| changed: [testbed-node-0]
2025-06-02 14:15:24.539541 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:15:24.539552 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:15:24.539563 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.539574 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:15:24.539585 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:15:24.539595 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:15:24.539606 | orchestrator |
2025-06-02 14:15:24.539618 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-02 14:15:24.539629 | orchestrator | Monday 02 June 2025 14:14:28 +0000 (0:00:41.866) 0:01:39.901 ***********
2025-06-02 14:15:24.539640 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:15:24.539650 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:15:24.539661 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:15:24.539672 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:15:24.539683 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.539694 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:15:24.539704 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:15:24.539715 | orchestrator |
2025-06-02 14:15:24.539726 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-02 14:15:24.539738 | orchestrator | Monday 02 June 2025 14:15:10 +0000 (0:00:42.006) 0:02:21.908 ***********
2025-06-02 14:15:24.539749 | orchestrator | ok: [testbed-manager]
2025-06-02 14:15:24.539760 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:15:24.539771 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:15:24.539782 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:15:24.539792 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:15:24.539803 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:15:24.539814 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:15:24.539825 | orchestrator |
2025-06-02 14:15:24.539909 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-02 14:15:24.539930 | orchestrator | Monday 02 June 2025 14:15:12 +0000 (0:00:01.916) 0:02:23.825 ***********
2025-06-02 14:15:24.539950 | orchestrator | changed: [testbed-manager]
2025-06-02 14:15:24.539961 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:15:24.539972 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:15:24.539983 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:15:24.539994 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:15:24.540005 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:15:24.540015 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:15:24.540026 | orchestrator |
2025-06-02 14:15:24.540037 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:15:24.540049 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540060 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540079 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540090 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540101 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540112 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540123 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 14:15:24.540134 | orchestrator |
2025-06-02 14:15:24.540145 | orchestrator |
2025-06-02 14:15:24.540156 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:15:24.540167 | orchestrator | Monday 02 June 2025 14:15:21 +0000 (0:00:09.852) 0:02:33.677 ***********
2025-06-02 14:15:24.540177 | orchestrator | ===============================================================================
2025-06-02 14:15:24.540188 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.01s
2025-06-02 14:15:24.540199 | orchestrator | common : Restart fluentd container ------------------------------------- 41.87s
2025-06-02 14:15:24.540210 | orchestrator | common : Restart cron container ----------------------------------------- 9.85s
2025-06-02 14:15:24.540221 | orchestrator | common : Copying over config.json files for services -------------------- 6.40s
2025-06-02 14:15:24.540232 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.62s
2025-06-02 14:15:24.540242 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.48s
2025-06-02 14:15:24.540253 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.64s
2025-06-02 14:15:24.540270 | orchestrator | common : Check common containers ---------------------------------------- 3.85s
2025-06-02 14:15:24.540281 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.39s
2025-06-02 14:15:24.540292 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.33s
2025-06-02 14:15:24.540303 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.22s
2025-06-02 14:15:24.540314 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.86s
2025-06-02 14:15:24.540325 | orchestrator | common : Creating log volume -------------------------------------------- 2.24s
2025-06-02 14:15:24.540335 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.09s
2025-06-02 14:15:24.540346 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.92s
2025-06-02 14:15:24.540357 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.90s
2025-06-02 14:15:24.540368 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s
2025-06-02 14:15:24.540386 | orchestrator | common : include_tasks -------------------------------------------------- 1.46s
2025-06-02 14:15:24.540397 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.42s
2025-06-02 14:15:24.540408 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.32s
2025-06-02 14:15:24.540418 | orchestrator | 2025-06-02 14:15:24 | INFO  | Wait 1 second(s) until the next check
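The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" messages that follow come from the OSISM CLI on the manager, which polls the state of the Celery tasks it enqueued for the individual kolla-ansible plays. A minimal sketch of that loop, assuming a get_task_state(task_id) lookup as a placeholder (not the actual osism client API):

    # Sketch of the polling behaviour visible below: report every pending
    # task's state once per round, then sleep before the next check.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)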
2025-06-02 14:15:27.574308 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:27.574502 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:27.575433 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:27.576161 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:27.579336 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:27.579901 | orchestrator | 2025-06-02 14:15:27 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:27.581917 | orchestrator | 2025-06-02 14:15:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:30.612359 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:30.615397 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:30.615970 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:30.617671 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:30.618413 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:30.620528 | orchestrator | 2025-06-02 14:15:30 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:30.622797 | orchestrator | 2025-06-02 14:15:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:33.667548 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:33.667944 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:33.668590 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:33.669272 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:33.670445 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:33.674773 | orchestrator | 2025-06-02 14:15:33 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:33.675866 | orchestrator | 2025-06-02 14:15:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:36.718086 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:36.719229 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:36.722364 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:36.723086 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:36.723896 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:36.724577 | orchestrator | 2025-06-02 14:15:36 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:36.726000 | orchestrator | 2025-06-02 14:15:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:39.785495 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:39.786253 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:39.787005 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:39.788014 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:39.788989 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:39.789735 | orchestrator | 2025-06-02 14:15:39 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:39.789754 | orchestrator | 2025-06-02 14:15:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:42.846782 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:42.847580 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:42.850985 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:42.853386 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:42.856051 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:42.858338 | orchestrator | 2025-06-02 14:15:42 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:42.859361 | orchestrator | 2025-06-02 14:15:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:45.903217 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:45.903618 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:45.905676 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:45.910614 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state STARTED
2025-06-02 14:15:45.913998 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:45.915354 | orchestrator | 2025-06-02 14:15:45 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:45.915607 | orchestrator | 2025-06-02 14:15:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:48.958598 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:48.962224 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:15:48.963211 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:48.964098 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:48.964824 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task 8feeea50-93ed-4a04-8785-b59823d58235 is in state SUCCESS
2025-06-02 14:15:48.966266 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:48.973876 | orchestrator | 2025-06-02 14:15:48 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:48.973916 | orchestrator | 2025-06-02 14:15:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:52.018557 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:52.018960 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:15:52.019618 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:52.020083 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:52.020809 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:52.021474 | orchestrator | 2025-06-02 14:15:52 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:52.021535 | orchestrator | 2025-06-02 14:15:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:55.044101 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:55.044957 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:15:55.046213 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:55.047072 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:55.048493 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:55.049925 | orchestrator | 2025-06-02 14:15:55 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:55.050142 | orchestrator | 2025-06-02 14:15:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:15:58.085932 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:15:58.087244 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:15:58.087642 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:15:58.088221 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:15:58.089557 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state STARTED
2025-06-02 14:15:58.090075 | orchestrator | 2025-06-02 14:15:58 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:15:58.090101 | orchestrator | 2025-06-02 14:15:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:01.121825 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:01.125325 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:01.128029 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:01.130124 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:01.132875 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task 8b261adc-b9c5-4c89-bb3e-c5e9e1f7a4a2 is in state SUCCESS
2025-06-02 14:16:01.135344 | orchestrator |
2025-06-02 14:16:01.135401 | orchestrator |
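The two console dumps that follow (memcached, then redis) each open with the same pattern: Ansible's group_by buckets hosts into dynamic groups such as "enable_memcached_True", and the role is then applied only to the hosts in the matching group. In plain Python the grouping step amounts to roughly this (the host variables are invented sample data, not taken from this job):

    # Rough equivalent of "Group hosts based on enabled services": build
    # dynamic groups named "<flag>_<value>" and target the _True group.
    hostvars = {
        "testbed-node-0": {"enable_memcached": True, "enable_redis": True},
        "testbed-node-1": {"enable_memcached": True, "enable_redis": True},
        "testbed-node-2": {"enable_memcached": True, "enable_redis": True},
    }

    groups = {}
    for host, variables in hostvars.items():
        for flag, value in variables.items():
            groups.setdefault(f"{flag}_{value}", []).append(host)

    # The "Apply role memcached" play then targets exactly this group:
    print(groups["enable_memcached_True"])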
2025-06-02 14:16:01.135420 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:16:01.135435 | orchestrator |
2025-06-02 14:16:01.135450 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:16:01.135464 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:00.517) 0:00:00.517 ***********
2025-06-02 14:16:01.135479 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:16:01.135494 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:16:01.135509 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:16:01.135523 | orchestrator |
2025-06-02 14:16:01.135538 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:16:01.135552 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:00.593) 0:00:01.110 ***********
2025-06-02 14:16:01.135567 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-02 14:16:01.135583 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-02 14:16:01.135645 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-02 14:16:01.135661 | orchestrator |
2025-06-02 14:16:01.135674 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-02 14:16:01.135688 | orchestrator |
2025-06-02 14:16:01.135701 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-02 14:16:01.135807 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.730) 0:00:01.840 ***********
2025-06-02 14:16:01.135828 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:16:01.135882 | orchestrator |
2025-06-02 14:16:01.135895 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-02 14:16:01.135908 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.854) 0:00:02.695 ***********
2025-06-02 14:16:01.135921 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 14:16:01.135935 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 14:16:01.135949 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 14:16:01.135962 | orchestrator |
2025-06-02 14:16:01.135975 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-02 14:16:01.136004 | orchestrator | Monday 02 June 2025 14:15:32 +0000 (0:00:01.038) 0:00:03.733 ***********
2025-06-02 14:16:01.136019 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 14:16:01.136033 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 14:16:01.136046 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 14:16:01.136059 | orchestrator |
2025-06-02 14:16:01.136073 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-02 14:16:01.136087 | orchestrator | Monday 02 June 2025 14:15:35 +0000 (0:00:03.148) 0:00:06.882 ***********
2025-06-02 14:16:01.136100 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:16:01.136114 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:16:01.136128 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:16:01.136141 | orchestrator |
2025-06-02 14:16:01.136154 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-02 14:16:01.136168 | orchestrator | Monday 02 June 2025 14:15:38 +0000 (0:00:02.813) 0:00:09.696 ***********
2025-06-02 14:16:01.136181 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:16:01.136195 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:16:01.136209 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:16:01.136223 | orchestrator |
2025-06-02 14:16:01.136237 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:16:01.136253 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.136287 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.136303 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.136317 | orchestrator |
2025-06-02 14:16:01.136332 | orchestrator |
2025-06-02 14:16:01.136345 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:16:01.136359 | orchestrator | Monday 02 June 2025 14:15:46 +0000 (0:00:08.249) 0:00:17.946 ***********
2025-06-02 14:16:01.136369 | orchestrator | ===============================================================================
2025-06-02 14:16:01.136378 | orchestrator | memcached : Restart memcached container --------------------------------- 8.25s
2025-06-02 14:16:01.136389 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.15s
2025-06-02 14:16:01.136400 | orchestrator | memcached : Check memcached container ----------------------------------- 2.81s
2025-06-02 14:16:01.136410 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.04s
2025-06-02 14:16:01.136420 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s
2025-06-02 14:16:01.136431 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-06-02 14:16:01.136442 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2025-06-02 14:16:01.136453 | orchestrator |
2025-06-02 14:16:01.136699 | orchestrator |
2025-06-02 14:16:01.136724 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:16:01.136735 | orchestrator |
2025-06-02 14:16:01.136745 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:16:01.136755 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:00.401) 0:00:00.401 ***********
2025-06-02 14:16:01.136767 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:16:01.136779 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:16:01.136790 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:16:01.136800 | orchestrator |
2025-06-02 14:16:01.136811 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:16:01.136821 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.615) 0:00:01.016 ***********
2025-06-02 14:16:01.136854 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:16:01.136699 | orchestrator |
2025-06-02 14:16:01.136724 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:16:01.136735 | orchestrator |
2025-06-02 14:16:01.136745 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:16:01.136755 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:00.401) 0:00:00.401 ***********
2025-06-02 14:16:01.136767 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:16:01.136779 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:16:01.136790 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:16:01.136800 | orchestrator |
2025-06-02 14:16:01.136811 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:16:01.136821 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.615) 0:00:01.016 ***********
2025-06-02 14:16:01.136854 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-02 14:16:01.136868 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-02 14:16:01.136879 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-02 14:16:01.136891 | orchestrator |
2025-06-02 14:16:01.136903 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-02 14:16:01.136916 | orchestrator |
2025-06-02 14:16:01.136927 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-02 14:16:01.136939 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.599) 0:00:01.616 ***********
2025-06-02 14:16:01.136950 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:16:01.136961 | orchestrator |
2025-06-02 14:16:01.136973 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-02 14:16:01.136985 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.914) 0:00:02.530 ***********
2025-06-02 14:16:01.136999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
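Each (item={'key': ..., 'value': ...}) pair above is one entry of the role's service map rendered by Ansible's with_dict loop. As YAML, the redis entry logged here would read roughly as follows; the variable name redis_services is an assumption, the values are taken verbatim from the log:

  redis_services:
    redis:
      container_name: redis
      group: redis
      enabled: true
      image: registry.osism.tech/kolla/redis:2024.2
      volumes:
        - "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "redis:/var/lib/redis/"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
        timeout: "30"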
2025-06-02 14:16:01.137125 | orchestrator |
2025-06-02 14:16:01.137132 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-02 14:16:01.137138 | orchestrator | Monday 02 June 2025 14:15:33 +0000 (0:00:01.566) 0:00:04.097 ***********
2025-06-02 14:16:01.137145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137203 | orchestrator |
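The copying tasks in this play iterate the same service map with with_dict, so item.key picks the per-service directory under /etc/kolla. A sketch of the shape; the template name, mode, and the omitted enabled-service filtering are assumptions:

  - name: Copying over default config.json files
    ansible.builtin.template:
      src: "{{ item.key }}.json.j2"                 # hypothetical template name
      dest: "/etc/kolla/{{ item.key }}/config.json"
      mode: "0660"                                  # assumed permissions
    with_dict: "{{ redis_services }}"               # redis_services as sketched above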
2025-06-02 14:16:01.137210 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-02 14:16:01.137217 | orchestrator | Monday 02 June 2025 14:15:36 +0000 (0:00:03.762) 0:00:07.859 ***********
2025-06-02 14:16:01.137224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137282 | orchestrator |
2025-06-02 14:16:01.137289 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-02 14:16:01.137296 | orchestrator | Monday 02 June 2025 14:15:40 +0000 (0:00:03.370) 0:00:11.229 ***********
2025-06-02 14:16:01.137303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 14:16:01.137357 | orchestrator |
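All of these items carry the same healthcheck dict with bare numeric strings; judging by Docker's healthcheck options these values are seconds. For comparison, the redis check expressed in compose-style YAML:

  healthcheck:
    test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
    interval: 30s
    timeout: 30s
    retries: 3
    start_period: 5s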
2025-06-02 14:16:01.137364 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 14:16:01.137371 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:02.101) 0:00:13.331 ***********
2025-06-02 14:16:01.137378 | orchestrator |
2025-06-02 14:16:01.137385 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 14:16:01.137391 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:00.172) 0:00:13.503 ***********
2025-06-02 14:16:01.137398 | orchestrator |
2025-06-02 14:16:01.137405 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 14:16:01.137416 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:00.089) 0:00:13.592 ***********
2025-06-02 14:16:01.137424 | orchestrator |
2025-06-02 14:16:01.137435 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-02 14:16:01.137446 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:00.120) 0:00:13.713 ***********
2025-06-02 14:16:01.137458 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:16:01.137471 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:16:01.137483 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:16:01.137496 | orchestrator |
2025-06-02 14:16:01.137507 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-02 14:16:01.137518 | orchestrator | Monday 02 June 2025 14:15:51 +0000 (0:00:09.032) 0:00:22.745 ***********
2025-06-02 14:16:01.137530 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:16:01.137541 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:16:01.137552 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:16:01.137564 | orchestrator |
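The empty "Flush handlers" tasks and the RUNNING HANDLER blocks above are the usual kolla-ansible sequencing: the copy tasks notify restart handlers, and a meta task forces those handlers to run at a fixed point rather than at the end of the play. A sketch of the pattern; the handler body here is only a placeholder, since kolla-ansible's real handlers recreate the container through its own module:

  - name: Flush handlers
    ansible.builtin.meta: flush_handlers

  # in the role's handlers section
  - name: Restart redis container
    ansible.builtin.command: docker restart redis   # placeholder for the real recreate-or-restart logic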
2025-06-02 14:16:01.137576 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:16:01.137588 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.137601 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.137611 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:16:01.137622 | orchestrator |
2025-06-02 14:16:01.137634 | orchestrator |
2025-06-02 14:16:01.137646 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:16:01.137658 | orchestrator | Monday 02 June 2025 14:16:00 +0000 (0:00:08.837) 0:00:31.583 ***********
2025-06-02 14:16:01.137675 | orchestrator | ===============================================================================
2025-06-02 14:16:01.137688 | orchestrator | redis : Restart redis container ----------------------------------------- 9.03s
2025-06-02 14:16:01.137700 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.84s
2025-06-02 14:16:01.137710 | orchestrator | redis : Copying over default config.json files -------------------------- 3.76s
2025-06-02 14:16:01.137717 | orchestrator | redis : Copying over redis config files --------------------------------- 3.37s
2025-06-02 14:16:01.137724 | orchestrator | redis : Check redis containers ------------------------------------------ 2.10s
2025-06-02 14:16:01.137730 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.57s
2025-06-02 14:16:01.137737 | orchestrator | redis : include_tasks --------------------------------------------------- 0.91s
2025-06-02 14:16:01.137744 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s
2025-06-02 14:16:01.137750 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-06-02 14:16:01.137757 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.38s
2025-06-02 14:16:01.137861 | orchestrator | 2025-06-02 14:16:01 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:01.137872 | orchestrator | 2025-06-02 14:16:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:04.174761 | orchestrator | 2025-06-02 14:16:04 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:04.179808 | orchestrator | 2025-06-02 14:16:04 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:04.181664 | orchestrator | 2025-06-02 14:16:04 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:04.184013 | orchestrator | 2025-06-02 14:16:04 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:04.186887 | orchestrator | 2025-06-02 14:16:04 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:04.186920 | orchestrator | 2025-06-02 14:16:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:07.220259 | orchestrator | 2025-06-02 14:16:07 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:07.222909 | orchestrator | 2025-06-02 14:16:07 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:07.222945 | orchestrator | 2025-06-02 14:16:07 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:07.222958 | orchestrator | 2025-06-02 14:16:07 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:07.222970 | orchestrator | 2025-06-02 14:16:07 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:07.222981 | orchestrator | 2025-06-02 14:16:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:10.250910 | orchestrator | 2025-06-02 14:16:10 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:10.251269 | orchestrator | 2025-06-02 14:16:10 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:10.252514 | orchestrator | 2025-06-02 14:16:10 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:10.253433 | orchestrator | 2025-06-02 14:16:10 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:10.253938 | orchestrator | 2025-06-02 14:16:10 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:10.254007 | orchestrator | 2025-06-02 14:16:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:13.289791 | orchestrator | 2025-06-02 14:16:13 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:13.290095 | orchestrator | 2025-06-02 14:16:13 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:13.290652 | orchestrator | 2025-06-02 14:16:13 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:13.291352 | orchestrator | 2025-06-02 14:16:13 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:13.292031 | orchestrator | 2025-06-02 14:16:13 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:13.292054 | orchestrator | 2025-06-02 14:16:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:16.326417 | orchestrator | 2025-06-02 14:16:16 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:16.326742 | orchestrator | 2025-06-02 14:16:16 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:16.327402 | orchestrator | 2025-06-02 14:16:16 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:16.327937 | orchestrator | 2025-06-02 14:16:16 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:16.328787 | orchestrator | 2025-06-02 14:16:16 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:16.328867 | orchestrator | 2025-06-02 14:16:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:19.365677 | orchestrator | 2025-06-02 14:16:19 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:19.365776 | orchestrator | 2025-06-02 14:16:19 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:19.367677 | orchestrator | 2025-06-02 14:16:19 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:19.369810 | orchestrator | 2025-06-02 14:16:19 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:19.375698 | orchestrator | 2025-06-02 14:16:19 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:19.375782 | orchestrator | 2025-06-02 14:16:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:22.412258 | orchestrator | 2025-06-02 14:16:22 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:22.412773 | orchestrator | 2025-06-02 14:16:22 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:22.418305 | orchestrator | 2025-06-02 14:16:22 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:22.418347 | orchestrator | 2025-06-02 14:16:22 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:22.418359 | orchestrator | 2025-06-02 14:16:22 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:22.418371 | orchestrator | 2025-06-02 14:16:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:25.452549 | orchestrator | 2025-06-02 14:16:25 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:25.453025 | orchestrator | 2025-06-02 14:16:25 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:25.454271 | orchestrator | 2025-06-02 14:16:25 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:25.458344 | orchestrator | 2025-06-02 14:16:25 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:25.461439 | orchestrator | 2025-06-02 14:16:25 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:25.461533 | orchestrator | 2025-06-02 14:16:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:28.494672 | orchestrator | 2025-06-02 14:16:28 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:28.497036 | orchestrator | 2025-06-02 14:16:28 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:28.497807 | orchestrator | 2025-06-02 14:16:28 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:28.499957 | orchestrator | 2025-06-02 14:16:28 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:28.500904 | orchestrator | 2025-06-02 14:16:28 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:28.500985 | orchestrator | 2025-06-02 14:16:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:31.536313 | orchestrator | 2025-06-02 14:16:31 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:31.536808 | orchestrator | 2025-06-02 14:16:31 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:31.537341 | orchestrator | 2025-06-02 14:16:31 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:31.538267 | orchestrator | 2025-06-02 14:16:31 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:31.540524 | orchestrator | 2025-06-02 14:16:31 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:31.540621 | orchestrator | 2025-06-02 14:16:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:34.585405 | orchestrator | 2025-06-02 14:16:34 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:34.586324 | orchestrator | 2025-06-02 14:16:34 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:34.587647 | orchestrator | 2025-06-02 14:16:34 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:34.588807 | orchestrator | 2025-06-02 14:16:34 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state STARTED
2025-06-02 14:16:34.593811 | orchestrator | 2025-06-02 14:16:34 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:34.593875 | orchestrator | 2025-06-02 14:16:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:37.639445 | orchestrator | 2025-06-02 14:16:37 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:37.639575 | orchestrator | 2025-06-02 14:16:37 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:37.639602 | orchestrator | 2025-06-02 14:16:37 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:37.640476 | orchestrator | 2025-06-02 14:16:37 | INFO  | Task af87ad62-222b-44e5-a444-7021f1c57501 is in state SUCCESS
2025-06-02 14:16:37.642613 | orchestrator |
2025-06-02 14:16:37.642657 | orchestrator |
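The INFO lines above are the deployment tooling polling its background tasks until they leave the STARTED state; task af87ad62-… is the first to reach SUCCESS, after which its play output is printed. The same wait loop as an Ansible sketch (check_task and task_id are hypothetical stand-ins for the real client call):

  - name: Wait until the deployment task has finished
    ansible.builtin.command: "check_task {{ task_id }}"   # hypothetical helper that prints the task state
    register: task_state
    until: task_state.stdout == "SUCCESS"
    retries: 600
    delay: 1   # matches the logged "Wait 1 second(s) until the next check"
    changed_when: false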
2025-06-02 14:16:37.642669 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:16:37.642681 | orchestrator |
2025-06-02 14:16:37.642693 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:16:37.642704 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:00.746) 0:00:00.746 ***********
2025-06-02 14:16:37.642715 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:16:37.642727 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:16:37.642739 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:16:37.642749 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:16:37.642760 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:16:37.642771 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:16:37.642781 | orchestrator |
2025-06-02 14:16:37.642792 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:16:37.642803 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.851) 0:00:01.598 ***********
2025-06-02 14:16:37.642815 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642862 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642881 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642899 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642918 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642936 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 14:16:37.642954 | orchestrator |
2025-06-02 14:16:37.642970 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-02 14:16:37.642982 | orchestrator |
2025-06-02 14:16:37.643001 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-02 14:16:37.643018 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:01.065) 0:00:02.664 ***********
2025-06-02 14:16:37.643037 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:16:37.643056 | orchestrator |
2025-06-02 14:16:37.643074 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 14:16:37.643093 | orchestrator | Monday 02 June 2025 14:15:33 +0000 (0:00:01.500) 0:00:04.164 ***********
2025-06-02 14:16:37.643113 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 14:16:37.643133 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 14:16:37.643176 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 14:16:37.643189 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 14:16:37.643201 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 14:16:37.643213 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 14:16:37.643226 | orchestrator |
2025-06-02 14:16:37.643238 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 14:16:37.643251 | orchestrator | Monday 02 June 2025 14:15:35 +0000 (0:00:02.293) 0:00:06.458 ***********
2025-06-02 14:16:37.643263 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 14:16:37.643276 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 14:16:37.643288 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 14:16:37.643300 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 14:16:37.643313 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 14:16:37.643327 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 14:16:37.643339 | orchestrator |
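The module-load tasks above first insert the openvswitch kernel module and then persist it across reboots via modules-load.d; a minimal sketch of that pair (the file name under /etc/modules-load.d is an assumption):

  - name: Load modules
    community.general.modprobe:
      name: "{{ item }}"
    loop: ["openvswitch"]

  - name: Persist modules via modules-load.d
    ansible.builtin.copy:
      content: "{{ item }}\n"
      dest: "/etc/modules-load.d/{{ item }}.conf"   # assumed file name pattern
    loop: ["openvswitch"]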
2025-06-02 14:16:37.643352 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 14:16:37.643364 | orchestrator | Monday 02 June 2025 14:15:38 +0000 (0:00:02.838) 0:00:09.296 ***********
2025-06-02 14:16:37.643374 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-02 14:16:37.643385 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:16:37.643396 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-02 14:16:37.643407 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:16:37.643417 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-02 14:16:37.643428 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:16:37.643438 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-02 14:16:37.643457 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:16:37.643468 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-02 14:16:37.643478 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:16:37.643489 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-02 14:16:37.643500 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:16:37.643510 | orchestrator |
2025-06-02 14:16:37.643521 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-02 14:16:37.643531 | orchestrator | Monday 02 June 2025 14:15:40 +0000 (0:00:01.951) 0:00:11.248 ***********
2025-06-02 14:16:37.643542 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:16:37.643553 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:16:37.643563 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:16:37.643574 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:16:37.643585 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:16:37.643595 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:16:37.643606 | orchestrator |
2025-06-02 14:16:37.643616 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-02 14:16:37.643627 | orchestrator | Monday 02 June 2025 14:15:41 +0000 (0:00:01.145) 0:00:12.393 ***********
2025-06-02 14:16:37.643658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643878 | orchestrator |
2025-06-02 14:16:37.643889 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-02 14:16:37.643900 | orchestrator | Monday 02 June 2025 14:15:43 +0000 (0:00:02.010) 0:00:14.403 ***********
2025-06-02 14:16:37.643912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.643966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.643995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644100 | orchestrator |
2025-06-02 14:16:37.644111 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-02 14:16:37.644122 | orchestrator | Monday 02 June 2025 14:15:47 +0000 (0:00:04.468) 0:00:18.872 ***********
2025-06-02 14:16:37.644133 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:16:37.644144 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:16:37.644155 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:16:37.644165 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:16:37.644176 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:16:37.644187 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:16:37.644197 | orchestrator |
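The "Check openvswitch containers" task that follows reconciles each running container against its service definition (image, volumes, healthcheck) and reports changed where a container must be (re)created. In generic terms it corresponds to something like the sketch below; kolla-ansible uses its own container module, so community.docker.docker_container is only a stand-in, and openvswitch_services is an assumed variable name with the shape of the logged items:

  - name: Check openvswitch containers (stand-in sketch)
    community.docker.docker_container:
      name: "{{ item.value.container_name }}"
      image: "{{ item.value.image }}"
      volumes: "{{ item.value.volumes }}"
      privileged: "{{ item.value.privileged | default(false) }}"
      healthcheck:
        test: "{{ item.value.healthcheck.test }}"
        interval: "{{ item.value.healthcheck.interval }}s"
        timeout: "{{ item.value.healthcheck.timeout }}s"
        retries: "{{ item.value.healthcheck.retries | int }}"
        start_period: "{{ item.value.healthcheck.start_period }}s"
    with_dict: "{{ openvswitch_services }}"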
2025-06-02 14:16:37.644208 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-02 14:16:37.644219 | orchestrator | Monday 02 June 2025 14:15:49 +0000 (0:00:01.430) 0:00:20.303 ***********
2025-06-02 14:16:37.644230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 14:16:37.644396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 14:16:37.644407 | orchestrator |
2025-06-02 14:16:37.644418 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644429 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:03.875) 0:00:24.178 ***********
2025-06-02 14:16:37.644440 | orchestrator |
2025-06-02 14:16:37.644451 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644461 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.140) 0:00:24.318 ***********
2025-06-02 14:16:37.644472 | orchestrator |
2025-06-02 14:16:37.644483 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644494 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.100) 0:00:24.418 ***********
2025-06-02 14:16:37.644504 | orchestrator |
2025-06-02 14:16:37.644515 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644525 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.128) 0:00:24.547 ***********
2025-06-02 14:16:37.644536 | orchestrator |
2025-06-02 14:16:37.644547 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644557 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.124) 0:00:24.671 ***********
2025-06-02 14:16:37.644568 | orchestrator |
2025-06-02 14:16:37.644579 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-02 14:16:37.644589 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.232) 0:00:24.904 ***********
2025-06-02 14:16:37.644600 | orchestrator |
2025-06-02 14:16:37.644611 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-06-02
14:16:37.644621 | orchestrator | Monday 02 June 2025 14:15:54 +0000 (0:00:00.243) 0:00:25.147 *********** 2025-06-02 14:16:37.644632 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:16:37.644643 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:16:37.644653 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:16:37.644664 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:16:37.644675 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:16:37.644691 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:16:37.644702 | orchestrator | 2025-06-02 14:16:37.644713 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-02 14:16:37.644723 | orchestrator | Monday 02 June 2025 14:16:04 +0000 (0:00:10.660) 0:00:35.807 *********** 2025-06-02 14:16:37.644734 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:16:37.644745 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:16:37.644756 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:16:37.644767 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:16:37.644777 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:16:37.644788 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:16:37.644798 | orchestrator | 2025-06-02 14:16:37.644809 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 14:16:37.644819 | orchestrator | Monday 02 June 2025 14:16:07 +0000 (0:00:02.596) 0:00:38.403 *********** 2025-06-02 14:16:37.644883 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:16:37.644894 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:16:37.644905 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:16:37.644916 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:16:37.644927 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:16:37.644937 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:16:37.644948 | orchestrator | 2025-06-02 14:16:37.644959 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 14:16:37.644970 | orchestrator | Monday 02 June 2025 14:16:15 +0000 (0:00:07.572) 0:00:45.976 *********** 2025-06-02 14:16:37.644981 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 14:16:37.644992 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 14:16:37.645003 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 14:16:37.645014 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 14:16:37.645025 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 14:16:37.645042 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 14:16:37.645054 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 14:16:37.645065 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 14:16:37.645075 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-2'}) 2025-06-02 14:16:37.645086 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 14:16:37.645097 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 14:16:37.645108 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 14:16:37.645119 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645130 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645141 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645152 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645163 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645180 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 14:16:37.645191 | orchestrator | 2025-06-02 14:16:37.645202 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 14:16:37.645213 | orchestrator | Monday 02 June 2025 14:16:22 +0000 (0:00:07.534) 0:00:53.510 *********** 2025-06-02 14:16:37.645224 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 14:16:37.645235 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:16:37.645246 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 14:16:37.645257 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:16:37.645268 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 14:16:37.645278 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:16:37.645289 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 14:16:37.645300 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 14:16:37.645311 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 14:16:37.645322 | orchestrator | 2025-06-02 14:16:37.645333 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 14:16:37.645344 | orchestrator | Monday 02 June 2025 14:16:24 +0000 (0:00:02.335) 0:00:55.846 *********** 2025-06-02 14:16:37.645355 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 14:16:37.645365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:16:37.645376 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 14:16:37.645387 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:16:37.645398 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 14:16:37.645409 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:16:37.645420 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 14:16:37.645432 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 14:16:37.645451 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 14:16:37.645470 
| orchestrator |
2025-06-02 14:16:37.645491 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-02 14:16:37.645503 | orchestrator | Monday 02 June 2025 14:16:28 +0000 (0:00:03.634) 0:00:59.480 ***********
2025-06-02 14:16:37.645513 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:16:37.645524 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:16:37.645535 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:16:37.645554 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:16:37.645572 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:16:37.645591 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:16:37.645608 | orchestrator |
2025-06-02 14:16:37.645628 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:16:37.645646 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 14:16:37.645664 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 14:16:37.645684 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 14:16:37.645704 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 14:16:37.645723 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 14:16:37.645752 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 14:16:37.645783 | orchestrator |
2025-06-02 14:16:37.645803 | orchestrator |
2025-06-02 14:16:37.645887 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:16:37.645905 | orchestrator | Monday 02 June 2025 14:16:37 +0000 (0:00:08.552) 0:01:08.032 ***********
2025-06-02 14:16:37.645916 | orchestrator | ===============================================================================
2025-06-02 14:16:37.645927 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.12s
2025-06-02 14:16:37.645938 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.66s
2025-06-02 14:16:37.645948 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.53s
2025-06-02 14:16:37.645959 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.47s
2025-06-02 14:16:37.645970 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.88s
2025-06-02 14:16:37.645981 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.63s
2025-06-02 14:16:37.645991 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.84s
2025-06-02 14:16:37.646002 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.60s
2025-06-02 14:16:37.646011 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.34s
2025-06-02 14:16:37.646069 | orchestrator | module-load : Load modules ---------------------------------------------- 2.29s
2025-06-02 14:16:37.646079 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.01s
2025-06-02 14:16:37.646089 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.95s
2025-06-02 14:16:37.646099 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.50s
2025-06-02 14:16:37.646108 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.43s
2025-06-02 14:16:37.646118 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.15s
2025-06-02 14:16:37.646127 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.07s
2025-06-02 14:16:37.646139 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.97s
2025-06-02 14:16:37.646157 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s
2025-06-02 14:16:37.646341 | orchestrator | 2025-06-02 14:16:37 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:37.646440 | orchestrator | 2025-06-02 14:16:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:40.700738 | orchestrator | 2025-06-02 14:16:40 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:40.701015 | orchestrator | 2025-06-02 14:16:40 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:40.701729 | orchestrator | 2025-06-02 14:16:40 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:40.704334 | orchestrator | 2025-06-02 14:16:40 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:40.704936 | orchestrator | 2025-06-02 14:16:40 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:16:40.705384 | orchestrator | 2025-06-02 14:16:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:43.743379 | orchestrator | 2025-06-02 14:16:43 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:43.743467 | orchestrator | 2025-06-02 14:16:43 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:43.743481 | orchestrator | 2025-06-02 14:16:43 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:43.743986 | orchestrator | 2025-06-02 14:16:43 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:43.744292 | orchestrator | 2025-06-02 14:16:43 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:16:43.744355 | orchestrator | 2025-06-02 14:16:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:46.783085 | orchestrator | 2025-06-02 14:16:46 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:16:46.783676 | orchestrator | 2025-06-02 14:16:46 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:16:46.785809 | orchestrator | 2025-06-02 14:16:46 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state STARTED
2025-06-02 14:16:46.786626 | orchestrator | 2025-06-02 14:16:46 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:16:46.787471 | orchestrator | 2025-06-02 14:16:46 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:16:46.787491 | orchestrator | 2025-06-02 14:16:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:16:49.841083 | orchestrator | 2025-06-02 14:16:49 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
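
For reference, the OVS changes recapped above in this play amount to a handful of ovs-vsctl calls. A minimal sketch of the equivalent manual commands on one node, assuming the kolla-ansible wrapper ultimately drives plain ovs-vsctl, and using testbed-node-0 as the example host:

  # Create the external bridge and attach the pre-existing vxlan0 interface
  ovs-vsctl --may-exist add-br br-ex
  ovs-vsctl --may-exist add-port br-ex vxlan0
  # Identify the chassis, mirroring the external_ids items set by the play
  ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
  ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
  # hw-offload is declared with state 'absent', i.e. the key is removed if present
  ovs-vsctl remove Open_vSwitch . other_config hw-offload
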
2025-06-02 14:16:49 - 2025-06-02 14:17:56 | orchestrator | [identical status checks repeated every ~3 s: tasks eddbd8f1-646c-4866-ba66-a74ee3dd19d0, c5b8e93a-9de0-4856-bd9c-af7b500b32a8, bac5d68b-b036-4eed-a6ae-73f1db2df368, 3911ab69-5ac6-4616-a206-7960e3f52b0b and 154447fd-66a8-4945-a0c7-d5f68c45be69 all remained in state STARTED, each pass followed by "Wait 1 second(s) until the next check"]
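
The watcher output condensed above is a plain poll-and-sleep loop, and the same pattern can be applied to the containers themselves, since every container definition in this play carries a healthcheck (ovsdb-client list-dbs for the DB server, ovs-appctl version for vswitchd). A sketch, assuming the Docker CLI is available on the node:

  # Wait until the openvswitch_db container reports a healthy state
  until [ "$(docker inspect -f '{{.State.Health.Status}}' openvswitch_db)" = "healthy" ]; do
    echo "openvswitch_db not healthy yet, waiting 1 second(s)"
    sleep 1
  done
  # Run the same probes the healthchecks use, once, by hand
  docker exec openvswitch_db ovsdb-client list-dbs
  docker exec openvswitch_vswitchd ovs-appctl version
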
orchestrator | 2025-06-02 14:17:56 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED 2025-06-02 14:17:56.906992 | orchestrator | 2025-06-02 14:17:56 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED 2025-06-02 14:17:56.907017 | orchestrator | 2025-06-02 14:17:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:17:59.944620 | orchestrator | 2025-06-02 14:17:59 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:17:59.944705 | orchestrator | 2025-06-02 14:17:59 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED 2025-06-02 14:17:59.944719 | orchestrator | 2025-06-02 14:17:59 | INFO  | Task bac5d68b-b036-4eed-a6ae-73f1db2df368 is in state SUCCESS 2025-06-02 14:17:59.945512 | orchestrator | 2025-06-02 14:17:59.945543 | orchestrator | 2025-06-02 14:17:59.945556 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-02 14:17:59.945567 | orchestrator | 2025-06-02 14:17:59.945578 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-02 14:17:59.945589 | orchestrator | Monday 02 June 2025 14:12:49 +0000 (0:00:00.216) 0:00:00.216 *********** 2025-06-02 14:17:59.945601 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.945612 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.945623 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.945634 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.945645 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.945655 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.945666 | orchestrator | 2025-06-02 14:17:59.945677 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-02 14:17:59.945688 | orchestrator | Monday 02 June 2025 14:12:50 +0000 (0:00:00.794) 0:00:01.011 *********** 2025-06-02 14:17:59.945699 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.945711 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.945721 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.945732 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.945743 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.945754 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.945765 | orchestrator | 2025-06-02 14:17:59.945777 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-02 14:17:59.945822 | orchestrator | Monday 02 June 2025 14:12:50 +0000 (0:00:00.761) 0:00:01.772 *********** 2025-06-02 14:17:59.945834 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.945845 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.945856 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.945866 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.945877 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.945888 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.945899 | orchestrator | 2025-06-02 14:17:59.945910 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-02 14:17:59.945921 | orchestrator | Monday 02 June 2025 14:12:51 +0000 (0:00:00.885) 0:00:02.657 *********** 2025-06-02 14:17:59.945931 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:17:59.945942 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:17:59.945953 | orchestrator | 
changed: [testbed-node-5] 2025-06-02 14:17:59.945963 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.945974 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.945985 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.945996 | orchestrator | 2025-06-02 14:17:59.946007 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-02 14:17:59.946070 | orchestrator | Monday 02 June 2025 14:12:53 +0000 (0:00:02.056) 0:00:04.714 *********** 2025-06-02 14:17:59.946102 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:17:59.946113 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:17:59.946124 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:17:59.946134 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.946145 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.946156 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.946167 | orchestrator | 2025-06-02 14:17:59.946178 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-02 14:17:59.946189 | orchestrator | Monday 02 June 2025 14:12:55 +0000 (0:00:01.267) 0:00:05.982 *********** 2025-06-02 14:17:59.946200 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:17:59.946210 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:17:59.946221 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:17:59.946232 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.946243 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.946253 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.946264 | orchestrator | 2025-06-02 14:17:59.946275 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-02 14:17:59.946286 | orchestrator | Monday 02 June 2025 14:12:56 +0000 (0:00:01.076) 0:00:07.058 *********** 2025-06-02 14:17:59.946297 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.946307 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.946318 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.946329 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.946340 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.946351 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.946361 | orchestrator | 2025-06-02 14:17:59.946372 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-02 14:17:59.946383 | orchestrator | Monday 02 June 2025 14:12:56 +0000 (0:00:00.815) 0:00:07.874 *********** 2025-06-02 14:17:59.946394 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.946405 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.946415 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.946426 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.946448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.946459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.946470 | orchestrator | 2025-06-02 14:17:59.946481 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-02 14:17:59.946492 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:00.563) 0:00:08.437 *********** 2025-06-02 14:17:59.946503 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946514 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946525 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.946536 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946547 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946558 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.946569 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946579 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946590 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.946601 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946623 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946634 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.946645 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946656 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946667 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.946685 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 14:17:59.946696 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 14:17:59.946707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.946718 | orchestrator | 2025-06-02 14:17:59.946729 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-02 14:17:59.946740 | orchestrator | Monday 02 June 2025 14:12:58 +0000 (0:00:00.821) 0:00:09.258 *********** 2025-06-02 14:17:59.946751 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.946762 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.946822 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.946841 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.946853 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.946864 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.946874 | orchestrator | 2025-06-02 14:17:59.946885 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-02 14:17:59.946897 | orchestrator | Monday 02 June 2025 14:12:59 +0000 (0:00:01.444) 0:00:10.702 *********** 2025-06-02 14:17:59.946908 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.946919 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.946930 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.946940 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.946951 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.946961 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.946972 | orchestrator | 2025-06-02 14:17:59.946983 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-02 14:17:59.946994 | orchestrator | Monday 02 June 2025 14:13:00 +0000 (0:00:00.631) 0:00:11.333 *********** 2025-06-02 14:17:59.947004 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:17:59.947015 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.947026 | orchestrator | changed: 
[testbed-node-5] 2025-06-02 14:17:59.947036 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:17:59.947047 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.947058 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.947068 | orchestrator | 2025-06-02 14:17:59.947079 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-02 14:17:59.947090 | orchestrator | Monday 02 June 2025 14:13:06 +0000 (0:00:06.307) 0:00:17.641 *********** 2025-06-02 14:17:59.947101 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.947111 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.947122 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.947132 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.947143 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.947154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.947165 | orchestrator | 2025-06-02 14:17:59.947176 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-02 14:17:59.947187 | orchestrator | Monday 02 June 2025 14:13:07 +0000 (0:00:01.004) 0:00:18.645 *********** 2025-06-02 14:17:59.947197 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.947208 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.947218 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.947229 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.947240 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.947251 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.947261 | orchestrator | 2025-06-02 14:17:59.947272 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-02 14:17:59.947284 | orchestrator | Monday 02 June 2025 14:13:09 +0000 (0:00:01.954) 0:00:20.600 *********** 2025-06-02 14:17:59.947295 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.947305 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.947316 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.947327 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.947345 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.947356 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.947367 | orchestrator | 2025-06-02 14:17:59.947378 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-02 14:17:59.947389 | orchestrator | Monday 02 June 2025 14:13:10 +0000 (0:00:00.950) 0:00:21.550 *********** 2025-06-02 14:17:59.947399 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-02 14:17:59.947416 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-02 14:17:59.947427 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.947437 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-02 14:17:59.947448 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-02 14:17:59.947459 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.947470 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-02 14:17:59.947480 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-02 14:17:59.947491 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.947502 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  
2025-06-02 14:17:59.947512 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-06-02 14:17:59.947523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:17:59.947534 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-06-02 14:17:59.947545 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-06-02 14:17:59.947578 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:17:59.947590 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-06-02 14:17:59.947601 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-06-02 14:17:59.947611 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:17:59.947622 | orchestrator |
2025-06-02 14:17:59.947633 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-06-02 14:17:59.947654 | orchestrator | Monday 02 June 2025 14:13:11 +0000 (0:00:01.284) 0:00:22.835 ***********
2025-06-02 14:17:59.947665 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:17:59.947676 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:17:59.947687 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:17:59.947698 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:17:59.947708 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:17:59.947719 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:17:59.947730 | orchestrator |
2025-06-02 14:17:59.947741 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-06-02 14:17:59.947752 | orchestrator |
2025-06-02 14:17:59.947763 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-06-02 14:17:59.947774 | orchestrator | Monday 02 June 2025 14:13:13 +0000 (0:00:01.915) 0:00:24.751 ***********
2025-06-02 14:17:59.947842 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.947856 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.947866 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.947877 | orchestrator |
2025-06-02 14:17:59.947888 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-06-02 14:17:59.947899 | orchestrator | Monday 02 June 2025 14:13:15 +0000 (0:00:01.955) 0:00:26.706 ***********
2025-06-02 14:17:59.947910 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.947921 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.947932 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.947943 | orchestrator |
2025-06-02 14:17:59.947953 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-06-02 14:17:59.947964 | orchestrator | Monday 02 June 2025 14:13:16 +0000 (0:00:01.033) 0:00:27.739 ***********
2025-06-02 14:17:59.947975 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.947986 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.947996 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.948007 | orchestrator |
2025-06-02 14:17:59.948018 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-02 14:17:59.948040 | orchestrator | Monday 02 June 2025 14:13:17 +0000 (0:00:01.173) 0:00:28.912 ***********
2025-06-02 14:17:59.948050 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.948061 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.948072 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.948082 | orchestrator |
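
The k3s_custom_registries tasks above are skipped because no custom registry mirror is configured for this run. When enabled, the role renders /etc/rancher/k3s/registries.yaml; a sketch of the equivalent manual step, with a placeholder mirror endpoint (registry.example.com is not part of this deployment):

  # Hypothetical mirror configuration; the endpoint below is a placeholder
  mkdir -p /etc/rancher/k3s
  printf '%s\n' \
    'mirrors:' \
    '  docker.io:' \
    '    endpoint:' \
    '      - "https://registry.example.com"' \
    > /etc/rancher/k3s/registries.yaml
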
14:17:59.948093 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-02 14:17:59.948104 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:00.699) 0:00:29.612 *********** 2025-06-02 14:17:59.948115 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.948125 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948136 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948147 | orchestrator | 2025-06-02 14:17:59.948158 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-02 14:17:59.948168 | orchestrator | Monday 02 June 2025 14:13:19 +0000 (0:00:00.378) 0:00:29.990 *********** 2025-06-02 14:17:59.948179 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:17:59.948190 | orchestrator | 2025-06-02 14:17:59.948201 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-02 14:17:59.948212 | orchestrator | Monday 02 June 2025 14:13:20 +0000 (0:00:00.933) 0:00:30.924 *********** 2025-06-02 14:17:59.948237 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.948249 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.948259 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.948278 | orchestrator | 2025-06-02 14:17:59.948290 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-02 14:17:59.948299 | orchestrator | Monday 02 June 2025 14:13:23 +0000 (0:00:03.417) 0:00:34.341 *********** 2025-06-02 14:17:59.948309 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948318 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948328 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.948337 | orchestrator | 2025-06-02 14:17:59.948347 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-02 14:17:59.948356 | orchestrator | Monday 02 June 2025 14:13:24 +0000 (0:00:00.685) 0:00:35.027 *********** 2025-06-02 14:17:59.948366 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948375 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948385 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.948394 | orchestrator | 2025-06-02 14:17:59.948404 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-02 14:17:59.948413 | orchestrator | Monday 02 June 2025 14:13:24 +0000 (0:00:00.875) 0:00:35.902 *********** 2025-06-02 14:17:59.948423 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948432 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948442 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.948451 | orchestrator | 2025-06-02 14:17:59.948466 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-02 14:17:59.948476 | orchestrator | Monday 02 June 2025 14:13:26 +0000 (0:00:01.972) 0:00:37.875 *********** 2025-06-02 14:17:59.948485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.948495 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948505 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948514 | orchestrator | 2025-06-02 14:17:59.948523 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-02 14:17:59.948533 | 
orchestrator | Monday 02 June 2025 14:13:27 +0000 (0:00:00.311) 0:00:38.186 *********** 2025-06-02 14:17:59.948543 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.948552 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.948561 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.948571 | orchestrator | 2025-06-02 14:17:59.948580 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-02 14:17:59.948590 | orchestrator | Monday 02 June 2025 14:13:27 +0000 (0:00:00.372) 0:00:38.559 *********** 2025-06-02 14:17:59.948600 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.948616 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.948625 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.948635 | orchestrator | 2025-06-02 14:17:59.948644 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-02 14:17:59.948654 | orchestrator | Monday 02 June 2025 14:13:29 +0000 (0:00:02.233) 0:00:40.793 *********** 2025-06-02 14:17:59.948670 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 14:17:59.948680 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 14:17:59.948690 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 14:17:59.948700 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 14:17:59.948710 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 14:17:59.948719 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 14:17:59.948729 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 14:17:59.948739 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 14:17:59.948748 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 14:17:59.948758 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 14:17:59.948767 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 14:17:59.948777 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 14:17:59.948801 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-06-02 14:17:59.948811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-02 14:17:59.948821 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-02 14:17:59.948831 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.948840 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.948850 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.948859 | orchestrator |
2025-06-02 14:17:59.948869 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-06-02 14:17:59.948879 | orchestrator | Monday 02 June 2025 14:14:25 +0000 (0:00:56.077) 0:01:36.870 ***********
2025-06-02 14:17:59.948888 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:17:59.948898 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:17:59.948907 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:17:59.948917 | orchestrator |
2025-06-02 14:17:59.948926 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-06-02 14:17:59.948936 | orchestrator | Monday 02 June 2025 14:14:26 +0000 (0:00:01.067) 0:01:37.311 ***********
2025-06-02 14:17:59.948945 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:17:59.948955 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:17:59.948964 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:17:59.948980 | orchestrator |
2025-06-02 14:17:59.948990 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-06-02 14:17:59.948999 | orchestrator | Monday 02 June 2025 14:14:27 +0000 (0:00:01.067) 0:01:38.378 ***********
2025-06-02 14:17:59.949009 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:17:59.949018 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:17:59.949028 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:17:59.949037 | orchestrator |
2025-06-02 14:17:59.949047 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-06-02 14:17:59.949057 | orchestrator | Monday 02 June 2025 14:14:28 +0000 (0:00:01.324) 0:01:39.703 ***********
2025-06-02 14:17:59.949066 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:17:59.949076 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:17:59.949085 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:17:59.949095 | orchestrator |
2025-06-02 14:17:59.949104 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-06-02 14:17:59.949114 | orchestrator | Monday 02 June 2025 14:14:42 +0000 (0:00:14.137) 0:01:53.841 ***********
2025-06-02 14:17:59.949123 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.949133 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.949142 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.949152 | orchestrator |
2025-06-02 14:17:59.949161 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-06-02 14:17:59.949171 | orchestrator | Monday 02 June 2025 14:14:43 +0000 (0:00:00.840) 0:01:54.681 ***********
2025-06-02 14:17:59.949180 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:17:59.949190 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:17:59.949199 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:17:59.949209 | orchestrator |
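The 56-second "Verify that all nodes actually joined" phase above is a retry loop: the cluster is first formed inside the throwaway k3s-init unit, and the play polls the API until every master is visible before it kills the temporary service and installs the permanent k3s unit. A sketch of such a join check, assuming a `master` inventory group (the retry count matches the log; the delay and exact command are illustrative):

    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
      register: joined_nodes
      # succeed only once every master node name shows up in the API
      until: joined_nodes.stdout.split() | length == groups['master'] | length
      retries: 20
      delay: 10
      changed_when: false

Bootstrapping inside a transient unit means a failed init never leaves a broken, enabled k3s service behind; only after this check passes is the real service file copied and enabled.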
14:17:59.949218 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-02 14:17:59.949228 | orchestrator | Monday 02 June 2025 14:14:44 +0000 (0:00:00.810) 0:01:55.492 *********** 2025-06-02 14:17:59.949238 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.949247 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.949257 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.949267 | orchestrator | 2025-06-02 14:17:59.949281 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-02 14:17:59.949292 | orchestrator | Monday 02 June 2025 14:14:45 +0000 (0:00:00.737) 0:01:56.230 *********** 2025-06-02 14:17:59.949301 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.949311 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.949320 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.949330 | orchestrator | 2025-06-02 14:17:59.949340 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-02 14:17:59.949349 | orchestrator | Monday 02 June 2025 14:14:46 +0000 (0:00:01.314) 0:01:57.544 *********** 2025-06-02 14:17:59.949359 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.949368 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.949378 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.949387 | orchestrator | 2025-06-02 14:17:59.949397 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-02 14:17:59.949406 | orchestrator | Monday 02 June 2025 14:14:46 +0000 (0:00:00.350) 0:01:57.894 *********** 2025-06-02 14:17:59.949416 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.949426 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.949435 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.949445 | orchestrator | 2025-06-02 14:17:59.949454 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-02 14:17:59.950010 | orchestrator | Monday 02 June 2025 14:14:47 +0000 (0:00:00.646) 0:01:58.540 *********** 2025-06-02 14:17:59.950071 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.950082 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.950091 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.950101 | orchestrator | 2025-06-02 14:17:59.950111 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-02 14:17:59.950129 | orchestrator | Monday 02 June 2025 14:14:48 +0000 (0:00:00.600) 0:01:59.141 *********** 2025-06-02 14:17:59.950138 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.950148 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.950157 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.950167 | orchestrator | 2025-06-02 14:17:59.950177 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-02 14:17:59.950186 | orchestrator | Monday 02 June 2025 14:14:49 +0000 (0:00:01.062) 0:02:00.204 *********** 2025-06-02 14:17:59.950196 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:17:59.950205 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:17:59.950215 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:17:59.950225 | orchestrator | 2025-06-02 14:17:59.950234 | orchestrator | TASK [k3s_server : Create kubectl symlink] 
************************************* 2025-06-02 14:17:59.950244 | orchestrator | Monday 02 June 2025 14:14:50 +0000 (0:00:00.758) 0:02:00.963 *********** 2025-06-02 14:17:59.950253 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.950263 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.950272 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.950282 | orchestrator | 2025-06-02 14:17:59.950291 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-02 14:17:59.950301 | orchestrator | Monday 02 June 2025 14:14:50 +0000 (0:00:00.272) 0:02:01.235 *********** 2025-06-02 14:17:59.950310 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.950320 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.950329 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.950339 | orchestrator | 2025-06-02 14:17:59.950348 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-02 14:17:59.950358 | orchestrator | Monday 02 June 2025 14:14:50 +0000 (0:00:00.287) 0:02:01.522 *********** 2025-06-02 14:17:59.950367 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.950377 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.950386 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.950395 | orchestrator | 2025-06-02 14:17:59.950405 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-02 14:17:59.950415 | orchestrator | Monday 02 June 2025 14:14:51 +0000 (0:00:00.913) 0:02:02.436 *********** 2025-06-02 14:17:59.950424 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.950434 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.950443 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.950452 | orchestrator | 2025-06-02 14:17:59.950462 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-02 14:17:59.950472 | orchestrator | Monday 02 June 2025 14:14:52 +0000 (0:00:00.590) 0:02:03.026 *********** 2025-06-02 14:17:59.950486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-02 14:17:59.950496 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-02 14:17:59.950506 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-02 14:17:59.950515 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-02 14:17:59.950525 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-02 14:17:59.950535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-02 14:17:59.950544 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-02 14:17:59.950554 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-02 14:17:59.950564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-02 14:17:59.950573 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-02 14:17:59.950588 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-02 14:17:59.950598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-02 14:17:59.950614 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-02 14:17:59.950624 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-02 14:17:59.950634 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-02 14:17:59.950643 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-02 14:17:59.950653 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-02 14:17:59.950663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-02 14:17:59.950672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-02 14:17:59.950682 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-02 14:17:59.950691 | orchestrator | 2025-06-02 14:17:59.950701 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-02 14:17:59.950711 | orchestrator | 2025-06-02 14:17:59.950720 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-02 14:17:59.950730 | orchestrator | Monday 02 June 2025 14:14:55 +0000 (0:00:02.996) 0:02:06.022 *********** 2025-06-02 14:17:59.950740 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.950749 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.950759 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.950768 | orchestrator | 2025-06-02 14:17:59.950778 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-02 14:17:59.950802 | orchestrator | Monday 02 June 2025 14:14:55 +0000 (0:00:00.566) 0:02:06.589 *********** 2025-06-02 14:17:59.950812 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.950822 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.950831 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.950841 | orchestrator | 2025-06-02 14:17:59.950851 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-02 14:17:59.950861 | orchestrator | Monday 02 June 2025 14:14:56 +0000 (0:00:00.646) 0:02:07.236 *********** 2025-06-02 14:17:59.950870 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.950880 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.950889 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.950899 | orchestrator | 2025-06-02 14:17:59.950909 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-02 14:17:59.950918 | orchestrator | Monday 02 June 2025 14:14:56 +0000 (0:00:00.314) 0:02:07.550 *********** 2025-06-02 14:17:59.950928 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:17:59.950938 | orchestrator | 2025-06-02 14:17:59.950948 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-02 14:17:59.950958 | orchestrator | Monday 02 June 2025 
14:14:57 +0000 (0:00:00.639) 0:02:08.190 ***********
2025-06-02 14:17:59.950967 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:17:59.950977 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:17:59.950987 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:17:59.950996 | orchestrator |
2025-06-02 14:17:59.951006 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-02 14:17:59.951016 | orchestrator | Monday 02 June 2025 14:14:57 +0000 (0:00:00.316) 0:02:08.506 ***********
2025-06-02 14:17:59.951025 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:17:59.951035 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:17:59.951044 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:17:59.951059 | orchestrator |
2025-06-02 14:17:59.951069 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-02 14:17:59.951079 | orchestrator | Monday 02 June 2025 14:14:57 +0000 (0:00:00.317) 0:02:08.823 ***********
2025-06-02 14:17:59.951088 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:17:59.951098 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:17:59.951108 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:17:59.951118 | orchestrator |
2025-06-02 14:17:59.951127 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-02 14:17:59.951137 | orchestrator | Monday 02 June 2025 14:14:58 +0000 (0:00:00.326) 0:02:09.149 ***********
2025-06-02 14:17:59.951147 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:17:59.951164 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:17:59.951174 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:17:59.951184 | orchestrator |
2025-06-02 14:17:59.951193 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-02 14:17:59.951203 | orchestrator | Monday 02 June 2025 14:14:59 +0000 (0:00:01.450) 0:02:10.600 ***********
2025-06-02 14:17:59.951213 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:17:59.951222 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:17:59.951232 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:17:59.951241 | orchestrator |
2025-06-02 14:17:59.951251 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-02 14:17:59.951260 | orchestrator |
2025-06-02 14:17:59.951270 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-02 14:17:59.951280 | orchestrator | Monday 02 June 2025 14:15:07 +0000 (0:00:08.271) 0:02:18.872 ***********
2025-06-02 14:17:59.951289 | orchestrator | ok: [testbed-manager]
2025-06-02 14:17:59.951299 | orchestrator |
2025-06-02 14:17:59.951309 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-02 14:17:59.951318 | orchestrator | Monday 02 June 2025 14:15:08 +0000 (0:00:00.731) 0:02:19.603 ***********
2025-06-02 14:17:59.951328 | orchestrator | changed: [testbed-manager]
2025-06-02 14:17:59.951337 | orchestrator |
2025-06-02 14:17:59.951347 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-02 14:17:59.951357 | orchestrator | Monday 02 June 2025 14:15:09 +0000 (0:00:00.968) 0:02:19.995 ***********
2025-06-02 14:17:59.951367 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
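The "Get kubeconfig file" task above is delegated from the manager to the first master; the tasks that follow write the file out on testbed-manager and rewrite its server address so kubectl talks to the cluster endpoint rather than the loopback address the file ships with. A sketch of that fetch-and-rewrite pattern (the delegate host and the 192.168.16.8 endpoint are taken from this log; paths and task bodies are illustrative, assuming the default location where k3s writes its admin kubeconfig):

    - name: Get kubeconfig file
      ansible.builtin.slurp:
        src: /etc/rancher/k3s/k3s.yaml
      delegate_to: testbed-node-0
      register: kubeconfig

    - name: Write kubeconfig file
      ansible.builtin.copy:
        content: "{{ kubeconfig.content | b64decode }}"
        dest: "{{ ansible_env.HOME }}/.kube/config"
        mode: "0600"

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: "{{ ansible_env.HOME }}/.kube/config"
        regexp: "https://127.0.0.1:6443"
        replace: "https://192.168.16.8:6443"

The same kubeconfig is then copied a second time for use inside the manager service, with the server address rewritten once more, which is what the two "inside the manager service" tasks below do.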
orchestrator | 2025-06-02 14:17:59.951392 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 14:17:59.951402 | orchestrator | Monday 02 June 2025 14:15:10 +0000 (0:00:00.968) 0:02:20.963 *********** 2025-06-02 14:17:59.951412 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951422 | orchestrator | 2025-06-02 14:17:59.951431 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 14:17:59.951441 | orchestrator | Monday 02 June 2025 14:15:10 +0000 (0:00:00.863) 0:02:21.827 *********** 2025-06-02 14:17:59.951451 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951460 | orchestrator | 2025-06-02 14:17:59.951470 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 14:17:59.951480 | orchestrator | Monday 02 June 2025 14:15:11 +0000 (0:00:00.577) 0:02:22.404 *********** 2025-06-02 14:17:59.951489 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 14:17:59.951499 | orchestrator | 2025-06-02 14:17:59.951508 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-02 14:17:59.951518 | orchestrator | Monday 02 June 2025 14:15:12 +0000 (0:00:01.484) 0:02:23.889 *********** 2025-06-02 14:17:59.951528 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 14:17:59.951537 | orchestrator | 2025-06-02 14:17:59.951547 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 14:17:59.951557 | orchestrator | Monday 02 June 2025 14:15:13 +0000 (0:00:01.000) 0:02:24.889 *********** 2025-06-02 14:17:59.951567 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951576 | orchestrator | 2025-06-02 14:17:59.951591 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 14:17:59.951601 | orchestrator | Monday 02 June 2025 14:15:14 +0000 (0:00:00.431) 0:02:25.320 *********** 2025-06-02 14:17:59.951611 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951620 | orchestrator | 2025-06-02 14:17:59.951630 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-02 14:17:59.951640 | orchestrator | 2025-06-02 14:17:59.951649 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-02 14:17:59.951659 | orchestrator | Monday 02 June 2025 14:15:14 +0000 (0:00:00.423) 0:02:25.744 *********** 2025-06-02 14:17:59.951669 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.951678 | orchestrator | 2025-06-02 14:17:59.951688 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-02 14:17:59.951698 | orchestrator | Monday 02 June 2025 14:15:14 +0000 (0:00:00.147) 0:02:25.891 *********** 2025-06-02 14:17:59.951707 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 14:17:59.951717 | orchestrator | 2025-06-02 14:17:59.951727 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-02 14:17:59.951737 | orchestrator | Monday 02 June 2025 14:15:15 +0000 (0:00:00.208) 0:02:26.099 *********** 2025-06-02 14:17:59.951746 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.951756 | orchestrator | 2025-06-02 14:17:59.951766 | orchestrator | TASK [kubectl : Install 
apt-transport-https package] *************************** 2025-06-02 14:17:59.951775 | orchestrator | Monday 02 June 2025 14:15:16 +0000 (0:00:01.186) 0:02:27.286 *********** 2025-06-02 14:17:59.951804 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.951814 | orchestrator | 2025-06-02 14:17:59.951824 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-06-02 14:17:59.951833 | orchestrator | Monday 02 June 2025 14:15:17 +0000 (0:00:01.493) 0:02:28.779 *********** 2025-06-02 14:17:59.951843 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951852 | orchestrator | 2025-06-02 14:17:59.951862 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-02 14:17:59.951871 | orchestrator | Monday 02 June 2025 14:15:18 +0000 (0:00:00.807) 0:02:29.587 *********** 2025-06-02 14:17:59.951881 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.951890 | orchestrator | 2025-06-02 14:17:59.951900 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-02 14:17:59.951910 | orchestrator | Monday 02 June 2025 14:15:19 +0000 (0:00:00.458) 0:02:30.045 *********** 2025-06-02 14:17:59.951919 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951929 | orchestrator | 2025-06-02 14:17:59.951939 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-02 14:17:59.951948 | orchestrator | Monday 02 June 2025 14:15:26 +0000 (0:00:07.285) 0:02:37.330 *********** 2025-06-02 14:17:59.951958 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.951967 | orchestrator | 2025-06-02 14:17:59.951977 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-02 14:17:59.951991 | orchestrator | Monday 02 June 2025 14:15:37 +0000 (0:00:11.197) 0:02:48.527 *********** 2025-06-02 14:17:59.952001 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.952011 | orchestrator | 2025-06-02 14:17:59.952020 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-02 14:17:59.952030 | orchestrator | 2025-06-02 14:17:59.952039 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-02 14:17:59.952049 | orchestrator | Monday 02 June 2025 14:15:38 +0000 (0:00:00.519) 0:02:49.047 *********** 2025-06-02 14:17:59.952059 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.952068 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.952078 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.952087 | orchestrator | 2025-06-02 14:17:59.952097 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-02 14:17:59.952107 | orchestrator | Monday 02 June 2025 14:15:38 +0000 (0:00:00.477) 0:02:49.525 *********** 2025-06-02 14:17:59.952123 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952133 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.952143 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.952152 | orchestrator | 2025-06-02 14:17:59.952161 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-02 14:17:59.952171 | orchestrator | Monday 02 June 2025 14:15:39 +0000 (0:00:00.453) 0:02:49.979 *********** 2025-06-02 14:17:59.952181 | orchestrator | included: 
/ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:17:59.952190 | orchestrator | 2025-06-02 14:17:59.952200 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-02 14:17:59.952216 | orchestrator | Monday 02 June 2025 14:15:39 +0000 (0:00:00.569) 0:02:50.548 *********** 2025-06-02 14:17:59.952226 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952235 | orchestrator | 2025-06-02 14:17:59.952245 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-02 14:17:59.952255 | orchestrator | Monday 02 June 2025 14:15:40 +0000 (0:00:00.990) 0:02:51.539 *********** 2025-06-02 14:17:59.952264 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952274 | orchestrator | 2025-06-02 14:17:59.952283 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-02 14:17:59.952293 | orchestrator | Monday 02 June 2025 14:15:41 +0000 (0:00:00.976) 0:02:52.516 *********** 2025-06-02 14:17:59.952303 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952312 | orchestrator | 2025-06-02 14:17:59.952322 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-02 14:17:59.952331 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:00.771) 0:02:53.287 *********** 2025-06-02 14:17:59.952341 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952350 | orchestrator | 2025-06-02 14:17:59.952360 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-02 14:17:59.952370 | orchestrator | Monday 02 June 2025 14:15:43 +0000 (0:00:01.193) 0:02:54.481 *********** 2025-06-02 14:17:59.952379 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952389 | orchestrator | 2025-06-02 14:17:59.952398 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-02 14:17:59.952408 | orchestrator | Monday 02 June 2025 14:15:43 +0000 (0:00:00.277) 0:02:54.759 *********** 2025-06-02 14:17:59.952417 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952427 | orchestrator | 2025-06-02 14:17:59.952437 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-02 14:17:59.952447 | orchestrator | Monday 02 June 2025 14:15:44 +0000 (0:00:00.284) 0:02:55.043 *********** 2025-06-02 14:17:59.952456 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952466 | orchestrator | 2025-06-02 14:17:59.952476 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-02 14:17:59.952486 | orchestrator | Monday 02 June 2025 14:15:44 +0000 (0:00:00.242) 0:02:55.286 *********** 2025-06-02 14:17:59.952495 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952505 | orchestrator | 2025-06-02 14:17:59.952514 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-02 14:17:59.952524 | orchestrator | Monday 02 June 2025 14:15:44 +0000 (0:00:00.316) 0:02:55.603 *********** 2025-06-02 14:17:59.952534 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952543 | orchestrator | 2025-06-02 14:17:59.952553 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-02 14:17:59.952562 | 
orchestrator | Monday 02 June 2025 14:15:50 +0000 (0:00:05.528) 0:03:01.131 *********** 2025-06-02 14:17:59.952572 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-02 14:17:59.952581 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-06-02 14:17:59.952591 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left). 2025-06-02 14:17:59.952607 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-02 14:17:59.952617 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-02 14:17:59.952627 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-02 14:17:59.952636 | orchestrator | 2025-06-02 14:17:59.952646 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-02 14:17:59.952656 | orchestrator | Monday 02 June 2025 14:17:30 +0000 (0:01:40.772) 0:04:41.904 *********** 2025-06-02 14:17:59.952665 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952675 | orchestrator | 2025-06-02 14:17:59.952684 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-02 14:17:59.952694 | orchestrator | Monday 02 June 2025 14:17:32 +0000 (0:00:01.379) 0:04:43.284 *********** 2025-06-02 14:17:59.952704 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952713 | orchestrator | 2025-06-02 14:17:59.952723 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-02 14:17:59.952736 | orchestrator | Monday 02 June 2025 14:17:34 +0000 (0:00:01.727) 0:04:45.012 *********** 2025-06-02 14:17:59.952746 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 14:17:59.952756 | orchestrator | 2025-06-02 14:17:59.952765 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-02 14:17:59.952775 | orchestrator | Monday 02 June 2025 14:17:35 +0000 (0:00:01.151) 0:04:46.163 *********** 2025-06-02 14:17:59.952833 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952843 | orchestrator | 2025-06-02 14:17:59.952853 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-02 14:17:59.952863 | orchestrator | Monday 02 June 2025 14:17:35 +0000 (0:00:00.240) 0:04:46.404 *********** 2025-06-02 14:17:59.952872 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-02 14:17:59.952882 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-02 14:17:59.952892 | orchestrator | 2025-06-02 14:17:59.952901 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-02 14:17:59.952911 | orchestrator | Monday 02 June 2025 14:17:37 +0000 (0:00:02.455) 0:04:48.859 *********** 2025-06-02 14:17:59.952920 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.952930 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.952939 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.952949 | orchestrator | 2025-06-02 14:17:59.952959 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-02 14:17:59.952968 | orchestrator | Monday 02 June 2025 
14:17:38 +0000 (0:00:00.567) 0:04:49.427 *********** 2025-06-02 14:17:59.952983 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.952993 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.953003 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.953012 | orchestrator | 2025-06-02 14:17:59.953022 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-02 14:17:59.953032 | orchestrator | 2025-06-02 14:17:59.953041 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-02 14:17:59.953051 | orchestrator | Monday 02 June 2025 14:17:39 +0000 (0:00:00.879) 0:04:50.306 *********** 2025-06-02 14:17:59.953061 | orchestrator | ok: [testbed-manager] 2025-06-02 14:17:59.953070 | orchestrator | 2025-06-02 14:17:59.953080 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-02 14:17:59.953090 | orchestrator | Monday 02 June 2025 14:17:39 +0000 (0:00:00.123) 0:04:50.430 *********** 2025-06-02 14:17:59.953099 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 14:17:59.953109 | orchestrator | 2025-06-02 14:17:59.953119 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-02 14:17:59.953128 | orchestrator | Monday 02 June 2025 14:17:40 +0000 (0:00:00.519) 0:04:50.949 *********** 2025-06-02 14:17:59.953145 | orchestrator | changed: [testbed-manager] 2025-06-02 14:17:59.953155 | orchestrator | 2025-06-02 14:17:59.953165 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-02 14:17:59.953174 | orchestrator | 2025-06-02 14:17:59.953184 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-02 14:17:59.953194 | orchestrator | Monday 02 June 2025 14:17:44 +0000 (0:00:04.875) 0:04:55.824 *********** 2025-06-02 14:17:59.953203 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:17:59.953213 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:17:59.953223 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:17:59.953232 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:17:59.953242 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:17:59.953252 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:17:59.953261 | orchestrator | 2025-06-02 14:17:59.953271 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-02 14:17:59.953281 | orchestrator | Monday 02 June 2025 14:17:45 +0000 (0:00:00.666) 0:04:56.491 *********** 2025-06-02 14:17:59.953291 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 14:17:59.953301 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 14:17:59.953310 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 14:17:59.953320 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 14:17:59.953330 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 14:17:59.953339 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 14:17:59.953349 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-06-02 14:17:59.953359 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 14:17:59.953368 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 14:17:59.953378 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 14:17:59.953388 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 14:17:59.953397 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 14:17:59.953407 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 14:17:59.953416 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 14:17:59.953426 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 14:17:59.953436 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 14:17:59.953449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 14:17:59.953459 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 14:17:59.953469 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 14:17:59.953479 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 14:17:59.953488 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 14:17:59.953498 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 14:17:59.953507 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 14:17:59.953517 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 14:17:59.953527 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 14:17:59.953542 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 14:17:59.953551 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 14:17:59.953561 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 14:17:59.953571 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 14:17:59.953586 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 14:17:59.953596 | orchestrator | 2025-06-02 14:17:59.953606 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-02 14:17:59.953616 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:13.030) 0:05:09.522 *********** 2025-06-02 14:17:59.953625 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.953635 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.953645 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.953655 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.953665 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
14:17:59.953674 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.953684 | orchestrator | 2025-06-02 14:17:59.953694 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-02 14:17:59.953704 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.492) 0:05:10.014 *********** 2025-06-02 14:17:59.953713 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:17:59.953723 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:17:59.953733 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:17:59.953742 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:17:59.953752 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:17:59.953761 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:17:59.953771 | orchestrator | 2025-06-02 14:17:59.953794 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:17:59.953804 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:17:59.953815 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-02 14:17:59.953825 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 14:17:59.953835 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 14:17:59.953845 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 14:17:59.953855 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 14:17:59.953865 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 14:17:59.953874 | orchestrator | 2025-06-02 14:17:59.953884 | orchestrator | 2025-06-02 14:17:59.953894 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:17:59.953903 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.534) 0:05:10.548 *********** 2025-06-02 14:17:59.953913 | orchestrator | =============================================================================== 2025-06-02 14:17:59.953923 | orchestrator | k3s_server_post : Wait for Cilium resources --------------------------- 100.77s 2025-06-02 14:17:59.953932 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.08s 2025-06-02 14:17:59.953942 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.14s 2025-06-02 14:17:59.953958 | orchestrator | Manage labels ---------------------------------------------------------- 13.03s 2025-06-02 14:17:59.953968 | orchestrator | kubectl : Install required packages ------------------------------------ 11.20s 2025-06-02 14:17:59.953977 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.27s 2025-06-02 14:17:59.953987 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.29s 2025-06-02 14:17:59.953996 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.31s 2025-06-02 14:17:59.954010 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.53s 2025-06-02 14:17:59.954045 | orchestrator | k9s : Install k9s packages 
---------------------------------------------- 4.88s
2025-06-02 14:17:59.954055 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.42s
2025-06-02 14:17:59.954065 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.00s
2025-06-02 14:17:59.954075 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.46s
2025-06-02 14:17:59.954084 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.23s
2025-06-02 14:17:59.954094 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.06s
2025-06-02 14:17:59.954103 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.97s
2025-06-02 14:17:59.954113 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.96s
2025-06-02 14:17:59.954122 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.95s
2025-06-02 14:17:59.954132 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.92s
2025-06-02 14:17:59.954141 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.73s
2025-06-02 14:17:59.954157 | orchestrator | 2025-06-02 14:17:59 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:17:59.954167 | orchestrator | 2025-06-02 14:17:59 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:17:59.954177 | orchestrator | 2025-06-02 14:17:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:18:02.980469 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:18:02.984278 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task df06a2da-f0ea-477e-976f-484583aa694d is in state STARTED
2025-06-02 14:18:02.986043 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:18:02.991034 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task 6d2d1481-773f-42ba-a82f-05193e4442aa is in state STARTED
2025-06-02 14:18:02.992647 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:18:02.998059 | orchestrator | 2025-06-02 14:18:02 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:18:02.998250 | orchestrator | 2025-06-02 14:18:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:18:06.028153 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:18:06.030353 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task df06a2da-f0ea-477e-976f-484583aa694d is in state STARTED
2025-06-02 14:18:06.032419 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state STARTED
2025-06-02 14:18:06.033220 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task 6d2d1481-773f-42ba-a82f-05193e4442aa is in state SUCCESS
2025-06-02 14:18:06.033664 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:18:06.036866 | orchestrator | 2025-06-02 14:18:06 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:18:06.037876 | orchestrator |
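The tasks recap shows where the five minutes of the k3s play went: waiting for the Cilium rollout (100.77s) and for all nodes to join (56.08s) dominate everything else. The INFO lines that follow come from the deploy driver, which has queued several OSISM manager tasks in parallel and polls each task ID once per second until it reports SUCCESS. Expressed as an Ansible-style sketch of the same poll loop, with a hypothetical status helper (`check-task-state` is not a real OSISM command; the actual polling happens in the manager's Python code):

    - name: Poll a manager task until it leaves the STARTED state
      ansible.builtin.command:
        cmd: check-task-state {{ task_id }}  # hypothetical helper, for illustration only
      register: task_state
      until: "'SUCCESS' in task_state.stdout"
      retries: 300
      delay: 1
      changed_when: false

Because the tasks run concurrently, the remaining playbook output below arrives interleaved as each task finishes.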
14:18:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:18:09.074006 | orchestrator | 2025-06-02 14:18:09 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:18:09.074213 | orchestrator | 2025-06-02 14:18:09 | INFO  | Task df06a2da-f0ea-477e-976f-484583aa694d is in state STARTED 2025-06-02 14:18:09.074244 | orchestrator | 2025-06-02 14:18:09 | INFO  | Task c5b8e93a-9de0-4856-bd9c-af7b500b32a8 is in state SUCCESS 2025-06-02 14:18:09.075281 | orchestrator | 2025-06-02 14:18:09.075311 | orchestrator | 2025-06-02 14:18:09.075323 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-02 14:18:09.075335 | orchestrator | 2025-06-02 14:18:09.075347 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 14:18:09.075358 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:00.135) 0:00:00.135 *********** 2025-06-02 14:18:09.075370 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 14:18:09.075381 | orchestrator | 2025-06-02 14:18:09.075393 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 14:18:09.075404 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:00.652) 0:00:00.787 *********** 2025-06-02 14:18:09.075415 | orchestrator | changed: [testbed-manager] 2025-06-02 14:18:09.075427 | orchestrator | 2025-06-02 14:18:09.075438 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 14:18:09.075449 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:01.066) 0:00:01.854 *********** 2025-06-02 14:18:09.075460 | orchestrator | changed: [testbed-manager] 2025-06-02 14:18:09.075471 | orchestrator | 2025-06-02 14:18:09.075498 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:18:09.075512 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:18:09.075525 | orchestrator | 2025-06-02 14:18:09.075536 | orchestrator | 2025-06-02 14:18:09.075547 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:18:09.075558 | orchestrator | Monday 02 June 2025 14:18:05 +0000 (0:00:00.480) 0:00:02.335 *********** 2025-06-02 14:18:09.075569 | orchestrator | =============================================================================== 2025-06-02 14:18:09.075580 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.07s 2025-06-02 14:18:09.075591 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s 2025-06-02 14:18:09.075602 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s 2025-06-02 14:18:09.075613 | orchestrator | 2025-06-02 14:18:09.075624 | orchestrator | 2025-06-02 14:18:09.075635 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-02 14:18:09.075646 | orchestrator | 2025-06-02 14:18:09.075657 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 14:18:09.075668 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.093) 0:00:00.093 *********** 2025-06-02 14:18:09.075679 | orchestrator | ok: [localhost] => { 2025-06-02 14:18:09.075691 | orchestrator |  "msg": "The task 'Check RabbitMQ service' 
fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-02 14:18:09.075703 | orchestrator | }
2025-06-02 14:18:09.075714 | orchestrator |
2025-06-02 14:18:09.075725 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-02 14:18:09.075736 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.034) 0:00:00.128 ***********
2025-06-02 14:18:09.075748 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-02 14:18:09.075809 | orchestrator | ...ignoring
2025-06-02 14:18:09.075822 | orchestrator |
2025-06-02 14:18:09.075833 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-02 14:18:09.075844 | orchestrator | Monday 02 June 2025 14:15:57 +0000 (0:00:03.281) 0:00:03.410 ***********
2025-06-02 14:18:09.075855 | orchestrator | skipping: [localhost]
2025-06-02 14:18:09.075866 | orchestrator |
2025-06-02 14:18:09.075880 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-02 14:18:09.075898 | orchestrator | Monday 02 June 2025 14:15:57 +0000 (0:00:00.052) 0:00:03.462 ***********
2025-06-02 14:18:09.075917 | orchestrator | ok: [localhost]
2025-06-02 14:18:09.075930 | orchestrator |
2025-06-02 14:18:09.075947 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:18:09.075967 | orchestrator |
2025-06-02 14:18:09.075984 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:18:09.076005 | orchestrator | Monday 02 June 2025 14:15:57 +0000 (0:00:00.207) 0:00:03.670 ***********
2025-06-02 14:18:09.076024 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:18:09.076041 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:18:09.076054 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:18:09.076067 | orchestrator |
2025-06-02 14:18:09.076080 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:18:09.076093 | orchestrator | Monday 02 June 2025 14:15:57 +0000 (0:00:00.313) 0:00:03.984 ***********
2025-06-02 14:18:09.076106 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-06-02 14:18:09.076119 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-06-02 14:18:09.076132 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-06-02 14:18:09.076146 | orchestrator |
2025-06-02 14:18:09.076160 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-06-02 14:18:09.076172 | orchestrator |
2025-06-02 14:18:09.076185 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 14:18:09.076199 | orchestrator | Monday 02 June 2025 14:15:58 +0000 (0:00:00.558) 0:00:04.543 ***********
2025-06-02 14:18:09.076212 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:18:09.076226 | orchestrator |
2025-06-02 14:18:09.076238 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-02 14:18:09.076249 | orchestrator | Monday 02 June 2025 14:15:58 +0000 (0:00:00.521) 0:00:05.064 ***********
2025-06-02 14:18:09.076260 | orchestrator | ok: [testbed-node-0]
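The ignored fatal on "Check RabbitMQ service" above is expected on a first deployment, exactly as the preceding message says: the play probes the management endpoint, and only if "RabbitMQ Management" answers on 192.168.16.9:15672 does it flip kolla_action_rabbitmq to upgrade; otherwise the fresh-deploy action stands. The error text matches the format of Ansible's wait_for module, so the gate plausibly looks like the following sketch (host, port, and search string come from the log; the task bodies are illustrative):

    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9
        port: 15672
        search_regex: "RabbitMQ Management"
        timeout: 2
      register: rabbitmq_check
      ignore_errors: true  # a timeout just means RabbitMQ is not deployed yet

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_check is not failed

Here the probe timed out, the upgrade branch was skipped, and the role proceeds with a plain deploy via deploy.yml below.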
14:18:09.076271 | orchestrator | 2025-06-02 14:18:09.076282 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-02 14:18:09.076292 | orchestrator | Monday 02 June 2025 14:15:59 +0000 (0:00:01.037) 0:00:06.102 *********** 2025-06-02 14:18:09.076304 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076315 | orchestrator | 2025-06-02 14:18:09.076342 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-02 14:18:09.076354 | orchestrator | Monday 02 June 2025 14:16:00 +0000 (0:00:00.342) 0:00:06.445 *********** 2025-06-02 14:18:09.076365 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076376 | orchestrator | 2025-06-02 14:18:09.076387 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-02 14:18:09.076398 | orchestrator | Monday 02 June 2025 14:16:00 +0000 (0:00:00.395) 0:00:06.841 *********** 2025-06-02 14:18:09.076409 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076420 | orchestrator | 2025-06-02 14:18:09.076431 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-02 14:18:09.076441 | orchestrator | Monday 02 June 2025 14:16:00 +0000 (0:00:00.385) 0:00:07.226 *********** 2025-06-02 14:18:09.076452 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076463 | orchestrator | 2025-06-02 14:18:09.076474 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 14:18:09.076485 | orchestrator | Monday 02 June 2025 14:16:01 +0000 (0:00:00.595) 0:00:07.822 *********** 2025-06-02 14:18:09.076514 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:18:09.076525 | orchestrator | 2025-06-02 14:18:09.076536 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 14:18:09.076548 | orchestrator | Monday 02 June 2025 14:16:02 +0000 (0:00:00.995) 0:00:08.818 *********** 2025-06-02 14:18:09.076559 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:18:09.076570 | orchestrator | 2025-06-02 14:18:09.076581 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-02 14:18:09.076592 | orchestrator | Monday 02 June 2025 14:16:03 +0000 (0:00:00.882) 0:00:09.700 *********** 2025-06-02 14:18:09.076602 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076613 | orchestrator | 2025-06-02 14:18:09.076624 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-02 14:18:09.076635 | orchestrator | Monday 02 June 2025 14:16:03 +0000 (0:00:00.420) 0:00:10.120 *********** 2025-06-02 14:18:09.076646 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:18:09.076657 | orchestrator | 2025-06-02 14:18:09.076668 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-02 14:18:09.076679 | orchestrator | Monday 02 June 2025 14:16:04 +0000 (0:00:00.385) 0:00:10.506 *********** 2025-06-02 14:18:09.076696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
2025-06-02 14:18:09.076668 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-06-02 14:18:09.076679 | orchestrator | Monday 02 June 2025 14:16:04 +0000 (0:00:00.385) 0:00:10.506 ***********
2025-06-02 14:18:09.076696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 14:18:09.076714 | orchestrator | changed: [testbed-node-0] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.076748 | orchestrator | changed: [testbed-node-2] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.076769 | orchestrator |
2025-06-02 14:18:09.076837 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-06-02 14:18:09.076864 | orchestrator | Monday 02 June 2025 14:16:05 +0000 (0:00:01.275) 0:00:11.781 ***********
2025-06-02 14:18:09.076886 | orchestrator | changed: [testbed-node-0] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.076907 | orchestrator | changed: [testbed-node-1] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.076928 | orchestrator | changed: [testbed-node-2] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.076947 | orchestrator |
2025-06-02 14:18:09.076974 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-06-02 14:18:09.077003 | orchestrator | Monday 02 June 2025 14:16:07 +0000 (0:00:02.433) 0:00:14.215 ***********
2025-06-02 14:18:09.077021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 14:18:09.077037 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 14:18:09.077052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 14:18:09.077068 | orchestrator |
2025-06-02 14:18:09.077083 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-02 14:18:09.077098 | orchestrator | Monday 02 June 2025 14:16:10 +0000 (0:00:02.397) 0:00:16.612 ***********
2025-06-02 14:18:09.077116 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 14:18:09.077133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 14:18:09.077157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 14:18:09.077173 | orchestrator |
2025-06-02 14:18:09.077190 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-02 14:18:09.077207 | orchestrator | Monday 02 June 2025 14:16:12 +0000 (0:00:02.298) 0:00:18.910 ***********
2025-06-02 14:18:09.077224 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 14:18:09.077243 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 14:18:09.077262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 14:18:09.077281 | orchestrator |
2025-06-02 14:18:09.077299 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-02 14:18:09.077311 | orchestrator | Monday 02 June 2025 14:16:13 +0000 (0:00:01.196) 0:00:20.107 ***********
2025-06-02 14:18:09.077322 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 14:18:09.077333 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 14:18:09.077343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-02 14:18:09.077355 | orchestrator |
2025-06-02 14:18:09.077366 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-02 14:18:09.077377 | orchestrator | Monday 02 June 2025 14:16:15 +0000 (0:00:01.638) 0:00:21.746 ***********
2025-06-02 14:18:09.077388 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 14:18:09.077399 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 14:18:09.077409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-02 14:18:09.077420 | orchestrator |
2025-06-02 14:18:09.077431 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-02 14:18:09.077442 | orchestrator | Monday 02 June 2025 14:16:17 +0000 (0:00:01.619) 0:00:23.365 ***********
2025-06-02 14:18:09.077453 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 14:18:09.077465 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 14:18:09.077475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-02 14:18:09.077486 | orchestrator |
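Each "Copying over ..." task above is the same kolla-ansible pattern: render a Jinja2 template into /etc/kolla/<service>/ on the host and notify the restart handler when the file changes. A condensed sketch for one of the files (paths taken from the log; the loop and handler wiring are simplified):

```yaml
# Sketch of the template-and-notify pattern behind every config task above.
- name: Copying over rabbitmq.conf
  ansible.builtin.template:
    src: /ansible/roles/rabbitmq/templates/rabbitmq.conf.j2
    dest: /etc/kolla/rabbitmq/rabbitmq.conf
    mode: "0660"
  become: true
  notify:
    - Restart rabbitmq container
```

This is why the "Restart rabbitmq container" handler fires later in the run: at least one of these templates produced a changed file.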
2025-06-02 14:18:09.077497 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 14:18:09.077508 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:01.584) 0:00:24.950 ***********
2025-06-02 14:18:09.077519 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:18:09.077540 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:18:09.077551 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:18:09.077561 | orchestrator |
2025-06-02 14:18:09.077572 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-06-02 14:18:09.077583 | orchestrator | Monday 02 June 2025 14:16:19 +0000 (0:00:00.447) 0:00:25.398 ***********
2025-06-02 14:18:09.077605 | orchestrator | changed: [testbed-node-2] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.077625 | orchestrator | changed: [testbed-node-1] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.077639 | orchestrator | changed: [testbed-node-0] => (item=same rabbitmq service definition as above)
2025-06-02 14:18:09.077651 | orchestrator |
2025-06-02 14:18:09.077662 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-06-02 14:18:09.077673 | orchestrator | Monday 02 June 2025 14:16:20 +0000 (0:00:01.641) 0:00:27.039 ***********
2025-06-02 14:18:09.077684 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:18:09.077694 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:18:09.077705 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:18:09.077716 | orchestrator |
2025-06-02 14:18:09.077734 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-06-02 14:18:09.077745 | orchestrator | Monday 02 June 2025 14:16:21 +0000 (0:00:00.855) 0:00:27.895 ***********
2025-06-02 14:18:09.077756 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:18:09.077767 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:18:09.077804 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:18:09.077815 | orchestrator |
2025-06-02 14:18:09.077827 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-02 14:18:09.077838 | orchestrator | Monday 02 June 2025 14:16:30 +0000 (0:00:08.421) 0:00:36.317 ***********
2025-06-02 14:18:09.077848 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:18:09.077859 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:18:09.077870 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:18:09.077881 | orchestrator |
2025-06-02 14:18:09.077892 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 14:18:09.077903 | orchestrator |
2025-06-02 14:18:09.077914 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 14:18:09.077925 | orchestrator | Monday 02 June 2025 14:16:30 +0000 (0:00:00.874) 0:00:37.192 ***********
2025-06-02 14:18:09.077936 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:18:09.077947 | orchestrator |
2025-06-02 14:18:09.077958 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 14:18:09.077969 | orchestrator | Monday 02 June 2025 14:16:31 +0000 (0:00:00.693) 0:00:37.886 ***********
2025-06-02 14:18:09.077980 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:18:09.077991 | orchestrator |
2025-06-02 14:18:09.078001 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 14:18:09.078012 | orchestrator | Monday 02 June 2025 14:16:31 +0000 (0:00:00.278) 0:00:38.164 ***********
2025-06-02 14:18:09.078080 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:18:09.078092 | orchestrator |
2025-06-02 14:18:09.078103 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 14:18:09.078114 | orchestrator | Monday 02 June 2025 14:16:38 +0000 (0:00:06.744) 0:00:44.908 ***********
2025-06-02 14:18:09.078124 | orchestrator | changed: [testbed-node-0]
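"Running RabbitMQ bootstrap container" is a one-shot run of the service image with the bootstrap_environment shown in the item earlier (KOLLA_BOOTSTRAP set), so the rabbitmq volume gets initialized before the long-lived container starts. kolla-ansible drives this through its own kolla_docker module; a plain-docker approximation looks like this (the cluster cookie is deliberately omitted here):

```yaml
# Illustrative approximation of the bootstrap run, not kolla-ansible's
# actual kolla_docker invocation.
- name: Running RabbitMQ bootstrap container
  ansible.builtin.command: >
    docker run --rm --name rabbitmq_bootstrap
    --env KOLLA_BOOTSTRAP=
    --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
    --env RABBITMQ_LOG_DIR=/var/log/kolla/rabbitmq
    --volume /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
    --volume rabbitmq:/var/lib/rabbitmq/
    --volume kolla_logs:/var/log/kolla/
    registry.osism.tech/kolla/rabbitmq:2024.2
  become: true
```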
2025-06-02 14:18:09.078135 | orchestrator |
2025-06-02 14:18:09.078146 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 14:18:09.078157 | orchestrator |
2025-06-02 14:18:09.078168 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 14:18:09.078187 | orchestrator | Monday 02 June 2025 14:17:27 +0000 (0:00:48.823) 0:01:33.732 ***********
2025-06-02 14:18:09.078199 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:18:09.078210 | orchestrator |
2025-06-02 14:18:09.078221 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 14:18:09.078232 | orchestrator | Monday 02 June 2025 14:17:28 +0000 (0:00:00.597) 0:01:34.330 ***********
2025-06-02 14:18:09.078243 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:18:09.078254 | orchestrator |
2025-06-02 14:18:09.078265 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 14:18:09.078276 | orchestrator | Monday 02 June 2025 14:17:28 +0000 (0:00:00.424) 0:01:34.754 ***********
2025-06-02 14:18:09.078286 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:18:09.078298 | orchestrator |
2025-06-02 14:18:09.078308 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 14:18:09.078319 | orchestrator | Monday 02 June 2025 14:17:30 +0000 (0:00:01.733) 0:01:36.487 ***********
2025-06-02 14:18:09.078330 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:18:09.078341 | orchestrator |
2025-06-02 14:18:09.078352 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-02 14:18:09.078363 | orchestrator |
2025-06-02 14:18:09.078379 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-02 14:18:09.078391 | orchestrator | Monday 02 June 2025 14:17:44 +0000 (0:00:14.394) 0:01:50.882 ***********
2025-06-02 14:18:09.078402 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:18:09.078419 | orchestrator |
2025-06-02 14:18:09.078430 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-02 14:18:09.078441 | orchestrator | Monday 02 June 2025 14:17:45 +0000 (0:00:00.605) 0:01:51.488 ***********
2025-06-02 14:18:09.078452 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:18:09.078463 | orchestrator |
2025-06-02 14:18:09.078474 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-02 14:18:09.078485 | orchestrator | Monday 02 June 2025 14:17:45 +0000 (0:00:00.249) 0:01:51.737 ***********
2025-06-02 14:18:09.078496 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:18:09.078507 | orchestrator |
2025-06-02 14:18:09.078518 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-02 14:18:09.078529 | orchestrator | Monday 02 June 2025 14:17:47 +0000 (0:00:01.865) 0:01:53.603 ***********
2025-06-02 14:18:09.078540 | orchestrator | changed: [testbed-node-2]
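Note the serialization: each node gets its own "Restart rabbitmq services" play, and the next node is restarted only after the previous one reports healthy (48.8 s for the first node, roughly 14 s each for the others), so the cluster never loses more than one member at a time. The "Waiting for rabbitmq to start" step can be approximated as a simple port wait (the role's real check may be stricter):

```yaml
# Sketch: block until the restarted node accepts AMQP connections again.
# api_interface_address is an assumed variable name for the node's
# 192.168.16.x address, not taken from the log.
- name: Waiting for rabbitmq to start
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 5672
    timeout: 60
```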
2025-06-02 14:18:09.078551 | orchestrator |
2025-06-02 14:18:09.078562 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-02 14:18:09.078573 | orchestrator |
2025-06-02 14:18:09.078584 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-02 14:18:09.078594 | orchestrator | Monday 02 June 2025 14:18:02 +0000 (0:00:14.868) 0:02:08.471 ***********
2025-06-02 14:18:09.078605 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:18:09.078616 | orchestrator |
2025-06-02 14:18:09.078627 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-02 14:18:09.078638 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:01.498) 0:02:09.970 ***********
2025-06-02 14:18:09.078649 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_outward_rabbitmq_True
2025-06-02 14:18:09.078671 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: outward_rabbitmq_restart
2025-06-02 14:18:09.078693 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:18:09.078704 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:18:09.078715 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:18:09.078726 | orchestrator |
2025-06-02 14:18:09.078737 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-02 14:18:09.078748 | orchestrator | skipping: no hosts matched
2025-06-02 14:18:09.078758 | orchestrator |
2025-06-02 14:18:09.078769 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-02 14:18:09.078839 | orchestrator | skipping: no hosts matched
2025-06-02 14:18:09.078851 | orchestrator |
2025-06-02 14:18:09.078862 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-02 14:18:09.078873 | orchestrator | skipping: no hosts matched
2025-06-02 14:18:09.078883 | orchestrator |
2025-06-02 14:18:09.078894 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:18:09.078906 | orchestrator | localhost        : ok=3   changed=0   unreachable=0  failed=0  skipped=1  rescued=0  ignored=1
2025-06-02 14:18:09.078918 | orchestrator | testbed-node-0   : ok=23  changed=14  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
2025-06-02 14:18:09.078929 | orchestrator | testbed-node-1   : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-06-02 14:18:09.078940 | orchestrator | testbed-node-2   : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
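With all three nodes on the same image, the post-deploy step turns on every stable feature flag; flag state is cluster-wide, so issuing the command once is enough. A hand-run equivalent (illustrative, not the role's verbatim task):

```yaml
# Sketch: enable all stable feature flags cluster-wide from one node.
- name: Enable all stable feature flags
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl enable_feature_flag all
  run_once: true
```

The two [WARNING] lines above are harmless: the outward RabbitMQ groups are empty in this testbed, which is also why the three "(outward)" plays match no hosts.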
2025-06-02 14:18:09.078951 | orchestrator |
2025-06-02 14:18:09.078962 | orchestrator |
2025-06-02 14:18:09.078973 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:18:09.078985 | orchestrator | Monday 02 June 2025 14:18:06 +0000 (0:00:02.425) 0:02:12.395 ***********
2025-06-02 14:18:09.078996 | orchestrator | ===============================================================================
2025-06-02 14:18:09.079014 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.09s
2025-06-02 14:18:09.079026 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.34s
2025-06-02 14:18:09.079036 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.42s
2025-06-02 14:18:09.079048 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.28s
2025-06-02 14:18:09.079058 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.43s
2025-06-02 14:18:09.079076 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.43s
2025-06-02 14:18:09.079088 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.40s
2025-06-02 14:18:09.079099 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.30s
2025-06-02 14:18:09.079110 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s
2025-06-02 14:18:09.079121 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.64s
2025-06-02 14:18:09.079132 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.64s
2025-06-02 14:18:09.079143 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.62s
2025-06-02 14:18:09.079154 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.58s
2025-06-02 14:18:09.079165 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.50s
2025-06-02 14:18:09.079176 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.28s
2025-06-02 14:18:09.079186 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.20s
2025-06-02 14:18:09.079196 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s
2025-06-02 14:18:09.079206 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.00s
2025-06-02 14:18:09.079215 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.95s
2025-06-02 14:18:09.079225 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s
2025-06-02 14:18:09.079235 | orchestrator | 2025-06-02 14:18:09 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:18:09.079246 | orchestrator | 2025-06-02 14:18:09 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:18:09.079256 | orchestrator | 2025-06-02 14:18:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:18:12.107893 | orchestrator | 2025-06-02 14:18:12 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:18:12.108015 | orchestrator | 2025-06-02 14:18:12 | INFO  | Task df06a2da-f0ea-477e-976f-484583aa694d is in state SUCCESS
2025-06-02 14:18:12.108336 | orchestrator | 2025-06-02 14:18:12 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:18:12.109556 | orchestrator | 2025-06-02 14:18:12 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state STARTED
2025-06-02 14:18:12.109576 | orchestrator | 2025-06-02 14:18:12 | INFO  | Wait 1 second(s) until the next check
[... 19 further identical check rounds from 14:18:15 through 14:19:10 omitted: tasks eddbd8f1-646c-4866-ba66-a74ee3dd19d0, 3911ab69-5ac6-4616-a206-7960e3f52b0b and 154447fd-66a8-4945-a0c7-d5f68c45be69 remain in state STARTED ...]
2025-06-02 14:19:13.077632 | orchestrator | 2025-06-02 14:19:13 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:19:13.078463 | orchestrator | 2025-06-02 14:19:13 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:19:13.081346 | orchestrator | 2025-06-02 14:19:13 | INFO  | Task 154447fd-66a8-4945-a0c7-d5f68c45be69 is in state SUCCESS
2025-06-02 14:19:13.082986 | orchestrator |
2025-06-02 14:19:13.083032 | orchestrator |
2025-06-02 14:19:13.083122 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-02 14:19:13.083135 | orchestrator |
2025-06-02 14:19:13.083146 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-02 14:19:13.083158 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:00.115) 0:00:00.115 ***********
2025-06-02 14:19:13.083169 | orchestrator | ok: [testbed-manager]
2025-06-02 14:19:13.083215 | orchestrator |
2025-06-02 14:19:13.083229 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-02 14:19:13.083240 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:00.398) 0:00:00.513 ***********
2025-06-02 14:19:13.083251 | orchestrator | ok: [testbed-manager]
2025-06-02 14:19:13.083262 | orchestrator |
2025-06-02 14:19:13.083274 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-02 14:19:13.083285 | orchestrator | Monday 02 June 2025 14:18:03 +0000 (0:00:00.416) 0:00:00.930 ***********
2025-06-02 14:19:13.083296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-02 14:19:13.083308 | orchestrator |
2025-06-02 14:19:13.083319 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-02 14:19:13.083330 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:00.611) 0:00:01.541 ***********
2025-06-02 14:19:13.083341 | orchestrator | changed: [testbed-manager]
2025-06-02 14:19:13.083352 | orchestrator |
2025-06-02 14:19:13.083363 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-02 14:19:13.083374 | orchestrator | Monday 02 June 2025 14:18:05 +0000 (0:00:01.180) 0:00:02.722 ***********
2025-06-02 14:19:13.083412 | orchestrator | changed: [testbed-manager]
2025-06-02 14:19:13.083423 | orchestrator |
2025-06-02 14:19:13.083434 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-02 14:19:13.083445 | orchestrator | Monday 02 June 2025 14:18:06 +0000 (0:00:00.692) 0:00:03.414 ***********
2025-06-02 14:19:13.083456 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 14:19:13.083467 | orchestrator |
2025-06-02 14:19:13.083478 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-02 14:19:13.083489 | orchestrator | Monday 02 June 2025 14:18:07 +0000 (0:00:01.549) 0:00:04.964 ***********
2025-06-02 14:19:13.083500 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 14:19:13.083511 | orchestrator |
2025-06-02 14:19:13.083522 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-02 14:19:13.083533 | orchestrator | Monday 02 June 2025 14:18:08 +0000 (0:00:00.775) 0:00:05.740 ***********
2025-06-02 14:19:13.083543 | orchestrator | ok: [testbed-manager]
2025-06-02 14:19:13.083554 | orchestrator |
2025-06-02 14:19:13.083565 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-02 14:19:13.083577 | orchestrator | Monday 02 June 2025 14:18:09 +0000 (0:00:00.398) 0:00:06.138 ***********
2025-06-02 14:19:13.083590 | orchestrator | ok: [testbed-manager]
2025-06-02 14:19:13.083603 | orchestrator |
2025-06-02 14:19:13.083630 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:19:13.083643 | orchestrator | testbed-manager  : ok=9  changed=4  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-06-02 14:19:13.083658 | orchestrator |
2025-06-02 14:19:13.083671 | orchestrator |
2025-06-02 14:19:13.083684 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:19:13.083696 | orchestrator | Monday 02 June 2025 14:18:09 +0000 (0:00:00.297) 0:00:06.436 ***********
2025-06-02 14:19:13.083709 | orchestrator | ===============================================================================
2025-06-02 14:19:13.083722 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.55s
2025-06-02 14:19:13.083795 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.18s
2025-06-02 14:19:13.083810 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s
2025-06-02 14:19:13.083823 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.69s
2025-06-02 14:19:13.083836 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.61s
2025-06-02 14:19:13.083849 | orchestrator | Create .kube directory -------------------------------------------------- 0.42s
2025-06-02 14:19:13.083862 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s
2025-06-02 14:19:13.083875 | orchestrator | Get home directory of operator user ------------------------------------- 0.40s
2025-06-02 14:19:13.083888 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2025-06-02 14:19:13.083902 | orchestrator |
2025-06-02 14:19:13.083916 | orchestrator |
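The kubeconfig play above boils down to: fetch the admin kubeconfig from the first control-plane node, install it for the operator user on testbed-manager, and rewrite the server address so the API is reachable from the manager. A condensed sketch (the source path, variable names, and target address are assumptions based on a typical layout, not the play's verbatim source):

```yaml
# Sketch of the kubeconfig plumbing: slurp from a node, write locally,
# then point the server entry at a reachable address.
- name: Get kubeconfig file
  ansible.builtin.slurp:
    src: /etc/kubernetes/admin.conf            # assumed path on the node
  delegate_to: testbed-node-0
  register: kubeconfig_b64

- name: Write kubeconfig file
  ansible.builtin.copy:
    content: "{{ kubeconfig_b64.content | b64decode }}"
    dest: "{{ operator_home }}/.kube/config"   # home dir gathered in task 1
    mode: "0600"

- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ operator_home }}/.kube/config"
    regexp: 'server: https://.*:6443'
    replace: 'server: https://192.168.16.10:6443'   # illustrative address
```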
2025-06-02 14:19:13.083929 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:19:13.083941 | orchestrator |
2025-06-02 14:19:13.083952 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:19:13.083963 | orchestrator | Monday 02 June 2025 14:16:41 +0000 (0:00:00.162) 0:00:00.162 ***********
2025-06-02 14:19:13.083974 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:19:13.083985 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:19:13.083996 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:19:13.084006 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:19:13.084017 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:19:13.084028 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:19:13.084039 | orchestrator |
2025-06-02 14:19:13.084050 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:19:13.084061 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:00.759) 0:00:00.921 ***********
2025-06-02 14:19:13.084080 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-02 14:19:13.084092 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-02 14:19:13.084103 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-02 14:19:13.084114 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-02 14:19:13.084125 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-02 14:19:13.084136 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-02 14:19:13.084147 | orchestrator |
2025-06-02 14:19:13.084173 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-02 14:19:13.084184 | orchestrator |
2025-06-02 14:19:13.084196 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-02 14:19:13.084207 | orchestrator | Monday 02 June 2025 14:16:43 +0000 (0:00:00.870) 0:00:01.792 ***********
2025-06-02 14:19:13.084219 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:19:13.084231 | orchestrator |
2025-06-02 14:19:13.084242 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-02 14:19:13.084253 | orchestrator | Monday 02 June 2025 14:16:44 +0000 (0:00:00.876) 0:00:02.668 ***********
2025-06-02 14:19:13.084267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 14:19:13.084281 | orchestrator | changed: [testbed-node-0] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084293 | orchestrator | changed: [testbed-node-2] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084310 | orchestrator | changed: [testbed-node-3] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084322 | orchestrator | changed: [testbed-node-4] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084334 | orchestrator | changed: [testbed-node-5] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084353 | orchestrator |
2025-06-02 14:19:13.084364 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-02 14:19:13.084376 | orchestrator | Monday 02 June 2025 14:16:45 +0000 (0:00:01.145) 0:00:03.814 ***********
2025-06-02 14:19:13.084387 | orchestrator | changed: [testbed-node-2] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084406 | orchestrator | changed: [testbed-node-3] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084419 | orchestrator | changed: [testbed-node-0] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084430 | orchestrator | changed: [testbed-node-1] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084442 | orchestrator | changed: [testbed-node-4] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084453 | orchestrator | changed: [testbed-node-5] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084464 | orchestrator |
2025-06-02 14:19:13.084476 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-02 14:19:13.084487 | orchestrator | Monday 02 June 2025 14:16:47 +0000 (0:00:01.484) 0:00:05.298 ***********
2025-06-02 14:19:13.084503 | orchestrator | changed: [testbed-node-0] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084515 | orchestrator | changed: [testbed-node-1] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084537 | orchestrator | changed: [testbed-node-2] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084549 | orchestrator | changed: [testbed-node-3] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084560 | orchestrator | changed: [testbed-node-4] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084579 | orchestrator | changed: [testbed-node-5] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084591 | orchestrator |
2025-06-02 14:19:13.084602 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 14:19:13.084613 | orchestrator | Monday 02 June 2025 14:16:48 +0000 (0:00:01.233) 0:00:06.532 ***********
2025-06-02 14:19:13.084625 | orchestrator | changed: [testbed-node-0] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084636 | orchestrator | changed: [testbed-node-1] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084647 | orchestrator | changed: [testbed-node-2] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084664 | orchestrator | changed: [testbed-node-3] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084675 | orchestrator | changed: [testbed-node-4] => (item=same ovn-controller service definition as above)
2025-06-02 14:19:13.084694 | orchestrator | changed: [testbed-node-5] => (item=same ovn-controller service definition as above)
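kolla-ansible manages these containers through per-service systemd units; the two tasks above create a drop-in directory for the unit and install an override file into it. Neither the unit name nor the override content appears in the log, so the following is only a plausible shape:

```yaml
# Sketch: create a systemd drop-in for the container unit. The unit name
# and the override content are assumptions, not taken from the log.
- name: Ensuring systemd override directory exists
  ansible.builtin.file:
    path: /etc/systemd/system/kolla-ovn_controller-container.service.d
    state: directory
    mode: "0770"

- name: Copying over systemd override
  ansible.builtin.copy:
    dest: /etc/systemd/system/kolla-ovn_controller-container.service.d/kolla.conf
    content: |
      [Service]
      Restart=on-failure
    mode: "0660"
```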
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.084839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.084851 | orchestrator | 2025-06-02 14:19:13.084862 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-02 14:19:13.084873 | orchestrator | Monday 02 June 2025 14:16:51 +0000 (0:00:01.760) 0:00:10.019 *********** 2025-06-02 14:19:13.084884 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.084901 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.084919 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.084930 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:19:13.084941 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:19:13.084952 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:19:13.084963 | orchestrator | 2025-06-02 14:19:13.084974 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-02 14:19:13.084985 | orchestrator | Monday 02 June 2025 14:16:54 +0000 (0:00:02.433) 0:00:12.453 *********** 2025-06-02 14:19:13.084996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-02 14:19:13.085007 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-02 14:19:13.085018 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-02 14:19:13.085029 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-02 14:19:13.085040 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-02 14:19:13.085051 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-02 14:19:13.085062 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085084 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085095 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085106 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085117 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 14:19:13.085128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085140 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085151 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085179 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085191 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 14:19:13.085202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085248 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 14:19:13.085270 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085300 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085322 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 14:19:13.085344 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085367 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085405 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 14:19:13.085416 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 14:19:13.085428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 14:19:13.085439 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 14:19:13.085450 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 14:19:13.085461 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 14:19:13.085472 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 14:19:13.085483 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-02 14:19:13.085495 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-02 14:19:13.085506 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-02 14:19:13.085517 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-02 14:19:13.085528 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-02 14:19:13.085540 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-02 14:19:13.085550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 14:19:13.085562 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 14:19:13.085573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 14:19:13.085591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 14:19:13.085602 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 14:19:13.085613 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 14:19:13.085631 | orchestrator |
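[Editor's note] The "Configure OVN in OVSDB" task above writes the chassis configuration into each node's local Open vSwitch database as external_ids: the Geneve tunnel endpoint (ovn-encap-ip, one per node), the southbound DB endpoints (ovn-remote, port 6642 on the three control nodes), the probe intervals, and the provider-network mappings (bridge and gateway options on the control nodes, chassis MAC mappings on the compute nodes, as logged). A minimal sketch of equivalent manual commands on testbed-node-0, assuming standard ovs-vsctl semantics; the exact invocation kolla-ansible performs may differ:

    # Tunnel endpoint and encapsulation type for this chassis
    ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=192.168.16.10
    ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
    # Southbound database endpoints the chassis connects to
    ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
    # Provider bridge mapping and gateway-chassis options (set on the control nodes in this run)
    ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=physnet1:br-ex
    ovs-vsctl set Open_vSwitch . external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"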
2025-06-02 14:19:13.085643 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085654 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:18.324) 0:00:30.777 *********** 2025-06-02 14:19:13.085666 | orchestrator | 2025-06-02 14:19:13.085677 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085688 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.063) 0:00:30.841 *********** 2025-06-02 14:19:13.085699 | orchestrator | 2025-06-02 14:19:13.085710 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085721 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.062) 0:00:30.903 *********** 2025-06-02 14:19:13.085749 | orchestrator | 2025-06-02 14:19:13.085761 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085772 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.062) 0:00:30.966 *********** 2025-06-02 14:19:13.085783 | orchestrator | 2025-06-02 14:19:13.085794 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085805 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.075) 0:00:31.041 *********** 2025-06-02 14:19:13.085816 | orchestrator | 2025-06-02 14:19:13.085827 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 14:19:13.085838 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.064) 0:00:31.105 *********** 2025-06-02 14:19:13.085849 | orchestrator | 2025-06-02 14:19:13.085860 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-02 14:19:13.085871 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.066) 0:00:31.172 *********** 2025-06-02 14:19:13.085882 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.085893 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.085904 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:19:13.085915 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.085926 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:19:13.085937 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:19:13.085949 | orchestrator | 2025-06-02 14:19:13.085960 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-02 14:19:13.085971 | orchestrator | Monday 02 June 2025 14:17:14 +0000 (0:00:01.877) 0:00:33.049 *********** 2025-06-02 14:19:13.085982 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.085993 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:19:13.086004 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.086015 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:19:13.086083 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.086106 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:19:13.086117 | orchestrator | 2025-06-02 14:19:13.086128 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-02 14:19:13.086139 | orchestrator | 2025-06-02 14:19:13.086151 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 14:19:13.086161 | orchestrator | Monday 02 June 2025 14:17:52 +0000 (0:00:37.901) 0:01:10.951 *********** 2025-06-02 14:19:13.086172 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:19:13.086183 | orchestrator | 2025-06-02 14:19:13.086194 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 14:19:13.086205 | orchestrator | Monday 02 June 2025 14:17:53 +0000 (0:00:00.632) 0:01:11.583 *********** 2025-06-02 14:19:13.086216 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:19:13.086227 | orchestrator | 2025-06-02 14:19:13.086237 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-02 14:19:13.086248 | orchestrator | Monday 02 June 2025 14:17:54 +0000 (0:00:01.004) 0:01:12.588 *********** 2025-06-02 14:19:13.086259 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.086278 | orchestrator | ok:
[testbed-node-1] 2025-06-02 14:19:13.086290 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.086300 | orchestrator | 2025-06-02 14:19:13.086311 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-02 14:19:13.086322 | orchestrator | Monday 02 June 2025 14:17:55 +0000 (0:00:00.951) 0:01:13.539 *********** 2025-06-02 14:19:13.086333 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.086344 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.086354 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.086365 | orchestrator | 2025-06-02 14:19:13.086376 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-02 14:19:13.086387 | orchestrator | Monday 02 June 2025 14:17:55 +0000 (0:00:00.380) 0:01:13.920 *********** 2025-06-02 14:19:13.086397 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.086408 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.086418 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.086429 | orchestrator | 2025-06-02 14:19:13.086440 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-02 14:19:13.086451 | orchestrator | Monday 02 June 2025 14:17:55 +0000 (0:00:00.280) 0:01:14.201 *********** 2025-06-02 14:19:13.086461 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.086472 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.086483 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.086493 | orchestrator | 2025-06-02 14:19:13.086504 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-02 14:19:13.086515 | orchestrator | Monday 02 June 2025 14:17:56 +0000 (0:00:00.444) 0:01:14.646 *********** 2025-06-02 14:19:13.086526 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.086536 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.086547 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.086558 | orchestrator | 2025-06-02 14:19:13.086568 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-02 14:19:13.086579 | orchestrator | Monday 02 June 2025 14:17:56 +0000 (0:00:00.404) 0:01:15.050 *********** 2025-06-02 14:19:13.086598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086631 | orchestrator | 2025-06-02 14:19:13.086642 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-02 14:19:13.086653 | orchestrator | Monday 02 June 2025 14:17:57 +0000 (0:00:00.311) 0:01:15.362 *********** 2025-06-02 14:19:13.086664 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086675 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086685 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086696 | orchestrator | 2025-06-02 14:19:13.086707 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-02 14:19:13.086718 | orchestrator | Monday 02 June 2025 14:17:57 +0000 (0:00:00.285) 0:01:15.647 *********** 2025-06-02 14:19:13.086729 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086758 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086769 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086779 | orchestrator | 
2025-06-02 14:19:13.086790 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-02 14:19:13.086801 | orchestrator | Monday 02 June 2025 14:17:57 +0000 (0:00:00.464) 0:01:16.112 *********** 2025-06-02 14:19:13.086812 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086823 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086833 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086844 | orchestrator | 2025-06-02 14:19:13.086855 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-02 14:19:13.086866 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:00.283) 0:01:16.396 *********** 2025-06-02 14:19:13.086876 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086888 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086899 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086917 | orchestrator | 2025-06-02 14:19:13.086928 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-02 14:19:13.086939 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:00.257) 0:01:16.654 *********** 2025-06-02 14:19:13.086950 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.086960 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.086971 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.086982 | orchestrator | 2025-06-02 14:19:13.086993 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-02 14:19:13.087004 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:00.234) 0:01:16.888 *********** 2025-06-02 14:19:13.087014 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087025 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087036 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087047 | orchestrator | 2025-06-02 14:19:13.087058 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-02 14:19:13.087070 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:00.364) 0:01:17.253 *********** 2025-06-02 14:19:13.087080 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087096 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087107 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087118 | orchestrator | 2025-06-02 14:19:13.087129 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-02 14:19:13.087140 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.240) 0:01:17.494 *********** 2025-06-02 14:19:13.087151 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087161 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087172 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087183 | orchestrator | 2025-06-02 14:19:13.087194 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-02 14:19:13.087205 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.282) 0:01:17.777 *********** 2025-06-02 14:19:13.087216 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087227 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087238 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087248 | orchestrator | 2025-06-02 14:19:13.087259 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN SB leader/follower role] ************** 2025-06-02 14:19:13.087270 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.426) 0:01:18.204 *********** 2025-06-02 14:19:13.087281 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087292 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087302 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087313 | orchestrator | 2025-06-02 14:19:13.087324 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-02 14:19:13.087335 | orchestrator | Monday 02 June 2025 14:18:00 +0000 (0:00:00.790) 0:01:18.994 *********** 2025-06-02 14:19:13.087346 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087357 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087367 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087378 | orchestrator | 2025-06-02 14:19:13.087389 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 14:19:13.087400 | orchestrator | Monday 02 June 2025 14:18:01 +0000 (0:00:00.561) 0:01:19.555 *********** 2025-06-02 14:19:13.087411 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:19:13.087422 | orchestrator | 2025-06-02 14:19:13.087433 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-02 14:19:13.087443 | orchestrator | Monday 02 June 2025 14:18:02 +0000 (0:00:01.516) 0:01:21.072 *********** 2025-06-02 14:19:13.087454 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.087465 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.087476 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.087486 | orchestrator | 2025-06-02 14:19:13.087497 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 14:19:13.087515 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:01.226) 0:01:22.298 *********** 2025-06-02 14:19:13.087526 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.087537 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.087548 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.087558 | orchestrator | 2025-06-02 14:19:13.087569 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 14:19:13.087580 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:00.385) 0:01:22.684 *********** 2025-06-02 14:19:13.087599 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087621 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087632 | orchestrator | 2025-06-02 14:19:13.087643 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 14:19:13.087654 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:00.332) 0:01:23.017 *********** 2025-06-02 14:19:13.087665 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087676 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087687 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087697 | orchestrator | 2025-06-02 14:19:13.087708 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-02 14:19:13.087719 | orchestrator | Monday 02 June 2025 14:18:05 +0000 
(0:00:00.880) 0:01:23.897 *********** 2025-06-02 14:19:13.087730 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087765 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087776 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087787 | orchestrator | 2025-06-02 14:19:13.087798 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 14:19:13.087809 | orchestrator | Monday 02 June 2025 14:18:06 +0000 (0:00:00.713) 0:01:24.611 *********** 2025-06-02 14:19:13.087820 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087830 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087841 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087852 | orchestrator | 2025-06-02 14:19:13.087862 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 14:19:13.087873 | orchestrator | Monday 02 June 2025 14:18:06 +0000 (0:00:00.535) 0:01:25.146 *********** 2025-06-02 14:19:13.087884 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087895 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087905 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087916 | orchestrator | 2025-06-02 14:19:13.087927 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 14:19:13.087938 | orchestrator | Monday 02 June 2025 14:18:07 +0000 (0:00:00.275) 0:01:25.421 *********** 2025-06-02 14:19:13.087949 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.087959 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.087970 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.087981 | orchestrator | 2025-06-02 14:19:13.087992 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 14:19:13.088003 | orchestrator | Monday 02 June 2025 14:18:07 +0000 (0:00:00.286) 0:01:25.707 *********** 2025-06-02 14:19:13.088028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088151 | orchestrator | 2025-06-02 14:19:13.088162 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 14:19:13.088173 | orchestrator | Monday 02 June 2025 14:18:09 +0000 (0:00:01.768) 0:01:27.476 *********** 2025-06-02 14:19:13.088185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088306 | orchestrator | 2025-06-02 14:19:13.088317 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 14:19:13.088329 | orchestrator | Monday 02 June 2025 14:18:12 +0000 (0:00:03.614) 0:01:31.091 *********** 2025-06-02 14:19:13.088340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 14:19:13.088351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.088461 | orchestrator | 2025-06-02 14:19:13.088472 | orchestrator | TASK [ovn-db : 
Flush handlers] ************************************************* 2025-06-02 14:19:13.088483 | orchestrator | Monday 02 June 2025 14:18:14 +0000 (0:00:02.177) 0:01:33.269 *********** 2025-06-02 14:19:13.088494 | orchestrator | 2025-06-02 14:19:13.088505 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 14:19:13.088516 | orchestrator | Monday 02 June 2025 14:18:15 +0000 (0:00:00.066) 0:01:33.335 *********** 2025-06-02 14:19:13.088527 | orchestrator | 2025-06-02 14:19:13.088538 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 14:19:13.088549 | orchestrator | Monday 02 June 2025 14:18:15 +0000 (0:00:00.068) 0:01:33.404 *********** 2025-06-02 14:19:13.088560 | orchestrator | 2025-06-02 14:19:13.088570 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 14:19:13.088581 | orchestrator | Monday 02 June 2025 14:18:15 +0000 (0:00:00.105) 0:01:33.509 *********** 2025-06-02 14:19:13.088592 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.088603 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.088614 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.088633 | orchestrator | 2025-06-02 14:19:13.088644 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 14:19:13.088655 | orchestrator | Monday 02 June 2025 14:18:22 +0000 (0:00:07.147) 0:01:40.657 *********** 2025-06-02 14:19:13.088666 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.088677 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.088688 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.088699 | orchestrator | 2025-06-02 14:19:13.088710 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 14:19:13.088720 | orchestrator | Monday 02 June 2025 14:18:29 +0000 (0:00:07.611) 0:01:48.268 *********** 2025-06-02 14:19:13.088774 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.088787 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.088798 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.088809 | orchestrator | 2025-06-02 14:19:13.088820 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 14:19:13.088831 | orchestrator | Monday 02 June 2025 14:18:32 +0000 (0:00:02.428) 0:01:50.696 *********** 2025-06-02 14:19:13.088841 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.088852 | orchestrator | 2025-06-02 14:19:13.088863 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 14:19:13.088878 | orchestrator | Monday 02 June 2025 14:18:32 +0000 (0:00:00.134) 0:01:50.831 *********** 2025-06-02 14:19:13.088890 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.088900 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.088909 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.088919 | orchestrator | 2025-06-02 14:19:13.088929 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 14:19:13.088938 | orchestrator | Monday 02 June 2025 14:18:33 +0000 (0:00:00.769) 0:01:51.601 *********** 2025-06-02 14:19:13.088948 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.088957 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.088967 | orchestrator | 
changed: [testbed-node-0] 2025-06-02 14:19:13.088976 | orchestrator | 2025-06-02 14:19:13.088986 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 14:19:13.088996 | orchestrator | Monday 02 June 2025 14:18:34 +0000 (0:00:00.906) 0:01:52.507 *********** 2025-06-02 14:19:13.089005 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.089015 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.089025 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.089034 | orchestrator | 2025-06-02 14:19:13.089044 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 14:19:13.089054 | orchestrator | Monday 02 June 2025 14:18:34 +0000 (0:00:00.750) 0:01:53.257 *********** 2025-06-02 14:19:13.089064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:19:13.089086 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:19:13.089096 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:19:13.089106 | orchestrator | 2025-06-02 14:19:13.089126 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 14:19:13.089136 | orchestrator | Monday 02 June 2025 14:18:35 +0000 (0:00:00.582) 0:01:53.840 *********** 2025-06-02 14:19:13.089146 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.089155 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.089165 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.089174 | orchestrator | 2025-06-02 14:19:13.089184 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 14:19:13.089194 | orchestrator | Monday 02 June 2025 14:18:36 +0000 (0:00:00.653) 0:01:54.493 *********** 2025-06-02 14:19:13.089204 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.089213 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.089223 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.089232 | orchestrator | 2025-06-02 14:19:13.089242 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 14:19:13.089252 | orchestrator | Monday 02 June 2025 14:18:37 +0000 (0:00:01.241) 0:01:55.735 *********** 2025-06-02 14:19:13.089269 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.089279 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.089288 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:19:13.089298 | orchestrator | 2025-06-02 14:19:13.089307 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 14:19:13.089317 | orchestrator | Monday 02 June 2025 14:18:37 +0000 (0:00:00.299) 0:01:56.034 *********** 2025-06-02 14:19:13.089334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089355 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089365 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089376 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089407 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089486 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089514 | orchestrator | 2025-06-02 14:19:13.089524 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 14:19:13.089534 | orchestrator | Monday 02 June 2025 14:18:39 +0000 (0:00:01.330) 0:01:57.364 *********** 2025-06-02 14:19:13.089544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089619 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-02 14:19:13.089657 | orchestrator | 2025-06-02 14:19:13.089667 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 14:19:13.089677 | orchestrator | Monday 02 June 2025 14:18:42 +0000 (0:00:03.515) 0:02:00.880 *********** 2025-06-02 14:19:13.089687 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089697 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089708 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:19:13.089807 | orchestrator | 2025-06-02 14:19:13.089817 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 14:19:13.089834 | orchestrator | Monday 02 June 2025 14:18:45 +0000 (0:00:02.701) 0:02:03.582 *********** 2025-06-02 14:19:13.089844 | orchestrator | 2025-06-02 14:19:13.089854 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 14:19:13.089864 | orchestrator | Monday 02 June 2025 14:18:45 +0000 (0:00:00.063) 0:02:03.646 *********** 2025-06-02 14:19:13.089874 | orchestrator | 2025-06-02 14:19:13.089884 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 14:19:13.089893 | orchestrator | Monday 02 June 2025 14:18:45 +0000 (0:00:00.064) 0:02:03.710 *********** 2025-06-02 14:19:13.089903 | orchestrator | 2025-06-02 14:19:13.089913 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 14:19:13.089922 | orchestrator | Monday 02 June 2025 14:18:45 +0000 (0:00:00.102) 0:02:03.813 *********** 2025-06-02 14:19:13.089932 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.089942 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.089952 | orchestrator | 2025-06-02 14:19:13.089961 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 14:19:13.089971 | orchestrator | Monday 02 June 2025 14:18:51 +0000 (0:00:06.136) 0:02:09.949 *********** 2025-06-02 14:19:13.089981 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.089990 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.090000 | orchestrator | 2025-06-02 14:19:13.090010 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 14:19:13.090060 | orchestrator | Monday 02 June 2025 14:18:58 +0000 (0:00:06.368) 0:02:16.318 *********** 2025-06-02 14:19:13.090071 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:19:13.090081 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:19:13.090091 | orchestrator | 2025-06-02 14:19:13.090101 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 14:19:13.090111 | orchestrator | Monday 02 June 2025 14:19:04 +0000 (0:00:06.191) 0:02:22.509 *********** 2025-06-02 14:19:13.090120 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:19:13.090130 | orchestrator | 2025-06-02 14:19:13.090140 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 14:19:13.090149 | orchestrator | Monday 02 June 2025 14:19:04 +0000 (0:00:00.123) 0:02:22.633 *********** 2025-06-02 14:19:13.090159 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:19:13.090169 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:19:13.090179 | orchestrator | ok: [testbed-node-2]
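[Editor's note] "Wait for leader election" and the "Get OVN_Northbound/OVN_Southbound cluster leader" tasks inspect the Raft state of the clustered ovsdb-servers that were just restarted. A sketch of a manual check from a control node, assuming the kolla containers expose the usual OVN control sockets (the socket path varies between images):

    # Raft role (leader/follower), term, and membership of the NB and SB clusters
    docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound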
2025-06-02 14:19:13.090188 | orchestrator |
2025-06-02 14:19:13.090205 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-02 14:19:13.090215 | orchestrator | Monday 02 June 2025 14:19:05 +0000 (0:00:01.181) 0:02:23.815 ***********
2025-06-02 14:19:13.090225 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:19:13.090234 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:19:13.090244 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:19:13.090254 | orchestrator |
2025-06-02 14:19:13.090263 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-02 14:19:13.090273 | orchestrator | Monday 02 June 2025 14:19:06 +0000 (0:00:00.758) 0:02:24.573 ***********
2025-06-02 14:19:13.090283 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:19:13.090293 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:19:13.090302 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:19:13.090312 | orchestrator |
2025-06-02 14:19:13.090322 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-02 14:19:13.090331 | orchestrator | Monday 02 June 2025 14:19:07 +0000 (0:00:00.833) 0:02:25.407 ***********
2025-06-02 14:19:13.090341 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:19:13.090351 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:19:13.090360 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:19:13.090370 | orchestrator |
2025-06-02 14:19:13.090380 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-02 14:19:13.090390 | orchestrator | Monday 02 June 2025 14:19:07 +0000 (0:00:00.746) 0:02:26.153 ***********
2025-06-02 14:19:13.090406 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:19:13.090417 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:19:13.090426 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:19:13.090436 | orchestrator |
2025-06-02 14:19:13.090446 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-02 14:19:13.090456 | orchestrator | Monday 02 June 2025 14:19:09 +0000 (0:00:01.334) 0:02:27.487 ***********
2025-06-02 14:19:13.090466 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:19:13.090476 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:19:13.090485 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:19:13.090495 | orchestrator |
2025-06-02 14:19:13.090505 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:19:13.090515 | orchestrator | testbed-node-0 : ok=44 changed=18 unreachable=0 failed=0 skipped=20 rescued=0 ignored=0
2025-06-02 14:19:13.090525 | orchestrator | testbed-node-1 : ok=43 changed=19 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025-06-02 14:19:13.090535 | orchestrator | testbed-node-2 : ok=43 changed=19 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025-06-02 14:19:13.090550 | orchestrator | testbed-node-3 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:19:13.090560 | orchestrator | testbed-node-4 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:19:13.090570 | orchestrator | testbed-node-5 : ok=12 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:19:13.090580 | orchestrator |
2025-06-02 14:19:13.090589 | orchestrator |
2025-06-02 14:19:13.090600 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:19:13.090609 | orchestrator | Monday 02 June 2025 14:19:10 +0000 (0:00:01.079) 0:02:28.567 ***********
2025-06-02 14:19:13.090619 | orchestrator | ===============================================================================
2025-06-02 14:19:13.090629 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.90s
2025-06-02 14:19:13.090639 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.32s
2025-06-02 14:19:13.090649 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.98s
2025-06-02 14:19:13.090658 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.28s
2025-06-02 14:19:13.090668 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.62s
2025-06-02 14:19:13.090678 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.61s
2025-06-02 14:19:13.090687 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.52s
2025-06-02 14:19:13.090697 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s
2025-06-02 14:19:13.090707 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s
2025-06-02 14:19:13.090716 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.18s
2025-06-02 14:19:13.090726 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.88s
2025-06-02 14:19:13.090754 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.77s
2025-06-02 14:19:13.090764 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.76s
2025-06-02 14:19:13.090773 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.73s
2025-06-02 14:19:13.090783 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.52s
2025-06-02 14:19:13.090793 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.48s
2025-06-02 14:19:13.090802 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.33s
2025-06-02 14:19:13.090819 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.33s
2025-06-02 14:19:13.090828 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.24s
2025-06-02 14:19:13.090843 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.23s
2025-06-02 14:19:13.090853 | orchestrator | 2025-06-02 14:19:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:19:16.122311 | orchestrator | 2025-06-02 14:19:16 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:19:16.123036 | orchestrator | 2025-06-02 14:19:16 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state STARTED
2025-06-02 14:19:16.123070 | orchestrator | 2025-06-02 14:19:16 | INFO  | Wait 1 second(s) until the next check
[... the same two STARTED checks and the one-second wait repeat every ~3 seconds from 14:19:19 through 14:21:21 ...]
2025-06-02 14:21:24.211484 | orchestrator | 2025-06-02 14:21:24 | INFO  | Task c47994ba-ce59-4aea-841e-4aa92c21e7cb is in state STARTED
[... all three tasks are polled as STARTED through 14:21:39 ...]
2025-06-02 14:21:42.521582 | orchestrator | 2025-06-02 14:21:42 | INFO  | Task c47994ba-ce59-4aea-841e-4aa92c21e7cb is in state SUCCESS
[... the remaining two tasks stay STARTED through 14:21:51 ...]
2025-06-02 14:21:54.718142 | orchestrator | 2025-06-02 14:21:54 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED
2025-06-02 14:21:54.723001 | orchestrator | 2025-06-02 14:21:54 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED
2025-06-02 14:21:54.729488 | orchestrator | 2025-06-02 14:21:54 | INFO  | Task 3911ab69-5ac6-4616-a206-7960e3f52b0b is in state SUCCESS
2025-06-02 14:21:54.734313 | orchestrator |
2025-06-02 14:21:54.734364 | orchestrator | None
2025-06-02 14:21:54.734378 | orchestrator |
2025-06-02 14:21:54.734389 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:21:54.734402 | orchestrator |
2025-06-02 14:21:54.734413 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:21:54.734424 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.241) 0:00:00.241 ***********
2025-06-02 14:21:54.734468 | orchestrator | ok: [testbed-node-0]
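The condensed block above is the deploy console's wait loop: each play runs as an asynchronous task on the manager, and the client re-checks the task states every few seconds until they flip from STARTED to SUCCESS, dropping a task from the poll set once it finishes (which is why c47994ba-… stops appearing after 14:21:42). The shape of that loop is roughly the following bash sketch; get_task_state is a hypothetical stand-in for the real API call, not part of this log:

  # Illustrative re-implementation of the wait loop seen above (bash).
  while true; do
    pending=0
    for id in "${TASK_IDS[@]}"; do
      state=$(get_task_state "$id")   # hypothetical helper
      echo "$(date -u '+%Y-%m-%d %H:%M:%S') | INFO  | Task $id is in state $state"
      [ "$state" = "SUCCESS" ] || pending=1
    done
    [ "$pending" -eq 0 ] && break
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done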
2025-06-02 14:21:54.734482 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:21:54.734515 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:21:54.734526 | orchestrator |
2025-06-02 14:21:54.734538 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:21:54.734550 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.340) 0:00:00.582 ***********
2025-06-02 14:21:54.734561 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-02 14:21:54.734572 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-02 14:21:54.734583 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-02 14:21:54.734594 | orchestrator |
2025-06-02 14:21:54.734605 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-02 14:21:54.734616 | orchestrator |
2025-06-02 14:21:54.734650 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 14:21:54.734664 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.945) 0:00:01.528 ***********
2025-06-02 14:21:54.734682 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:21:54.734700 | orchestrator |
2025-06-02 14:21:54.734729 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-02 14:21:54.734788 | orchestrator | Monday 02 June 2025 14:15:33 +0000 (0:00:01.433) 0:00:02.961 ***********
2025-06-02 14:21:54.734809 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:21:54.734829 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:21:54.734849 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:21:54.734869 | orchestrator |
2025-06-02 14:21:54.735183 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 14:21:54.735197 | orchestrator | Monday 02 June 2025 14:15:34 +0000 (0:00:01.281) 0:00:04.243 ***********
2025-06-02 14:21:54.735210 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:21:54.735221 | orchestrator |
2025-06-02 14:21:54.735232 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-02 14:21:54.735243 | orchestrator | Monday 02 June 2025 14:15:35 +0000 (0:00:00.995) 0:00:05.238 ***********
2025-06-02 14:21:54.735254 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:21:54.735265 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:21:54.735276 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:21:54.735287 | orchestrator |
2025-06-02 14:21:54.735297 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-02 14:21:54.735308 | orchestrator | Monday 02 June 2025 14:15:36 +0000 (0:00:01.303) 0:00:06.541 ***********
2025-06-02 14:21:54.735319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735341 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 14:21:54.735383 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 14:21:54.735396 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 14:21:54.735407 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 14:21:54.735418 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 14:21:54.735428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 14:21:54.735439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
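The sysctl loop above prepares the nodes for the VIP-based load balancer: the two ip_nonlocal_bind switches let HAProxy and keepalived bind to the virtual address even on nodes that do not currently hold it, and the KOLLA_UNSET item leaves net.ipv4.tcp_retries2 at the kernel default. The equivalent one-off commands (a sketch; the role also persists these settings):

  # Allow binding to addresses that are not (yet) assigned to this host:
  sysctl -w net.ipv6.ip_nonlocal_bind=1
  sysctl -w net.ipv4.ip_nonlocal_bind=1
  # Raise the UNIX datagram socket queue length:
  sysctl -w net.unix.max_dgram_qlen=128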
2025-06-02 14:21:54.735462 | orchestrator |
2025-06-02 14:21:54.735473 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 14:21:54.735546 | orchestrator | Monday 02 June 2025 14:15:41 +0000 (0:00:04.311) 0:00:10.853 ***********
2025-06-02 14:21:54.735560 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 14:21:54.735571 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 14:21:54.735582 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 14:21:54.735593 | orchestrator |
2025-06-02 14:21:54.735604 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 14:21:54.735675 | orchestrator | Monday 02 June 2025 14:15:42 +0000 (0:00:00.980) 0:00:11.833 ***********
2025-06-02 14:21:54.735687 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 14:21:54.735699 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 14:21:54.735710 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 14:21:54.735721 | orchestrator |
2025-06-02 14:21:54.735731 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 14:21:54.735742 | orchestrator | Monday 02 June 2025 14:15:44 +0000 (0:00:01.832) 0:00:13.666 ***********
2025-06-02 14:21:54.735837 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-02 14:21:54.735851 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.735878 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-02 14:21:54.735889 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.735901 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-02 14:21:54.736013 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.736042 | orchestrator |
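"Load modules" inserts ip_vs (IP Virtual Server, which keepalived can use) immediately, while "Persist modules via modules-load.d" keeps it loaded across reboots; the skipped "Drop module persistence" is the inverse cleanup path for disabled modules. By hand this amounts to roughly the following, where the file name is an assumption, not taken from this log:

  # Load the IPVS module now:
  modprobe ip_vs
  # And have systemd load it on every boot (file name assumed):
  printf 'ip_vs\n' > /etc/modules-load.d/ip_vs.conf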
2025-06-02 14:21:54.736054 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-02 14:21:54.736065 | orchestrator | Monday 02 June 2025 14:15:45 +0000 (0:00:01.519) 0:00:15.185 ***********
2025-06-02 14:21:54.736080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 14:21:54.736107 | orchestrator | changed: [testbed-node-2] => (item=haproxy) [same dict; healthcheck targets 192.168.16.12]
2025-06-02 14:21:54.736130 | orchestrator | changed: [testbed-node-0] => (item=haproxy) [same dict; healthcheck targets 192.168.16.10]
2025-06-02 14:21:54.736142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 14:21:54.736172 | orchestrator | changed: [testbed-node-1] => (item=proxysql) [item dict identical to testbed-node-2]
2025-06-02 14:21:54.736213 | orchestrator | changed: [testbed-node-0] => (item=proxysql) [item dict identical to testbed-node-2]
2025-06-02 14:21:54.736257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.736270 | orchestrator | changed: [testbed-node-1] => (item=keepalived) [item dict identical to testbed-node-2]
2025-06-02 14:21:54.736282 | orchestrator | changed: [testbed-node-0] => (item=keepalived) [item dict identical to testbed-node-2]
2025-06-02 14:21:54.736293 | orchestrator |
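Each container item above carries a Docker healthcheck: HAProxy is probed over HTTP on its monitor port 61313 via healthcheck_curl, ProxySQL via healthcheck_listen on its admin port 6032, and keepalived has none. Once the containers are up, the same probes can be re-run by hand, assuming the helper scripts ship on the PATH inside the Kolla images (an assumption the test strings above suggest but this log does not prove):

  # Re-run the configured health probes manually
  # (192.168.16.10 is testbed-node-0's address from the item above):
  docker exec haproxy healthcheck_curl http://192.168.16.10:61313
  docker exec proxysql healthcheck_listen proxysql 6032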
2025-06-02 14:21:54.736304 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-02 14:21:54.736361 | orchestrator | Monday 02 June 2025 14:15:47 +0000 (0:00:02.167) 0:00:17.352 ***********
2025-06-02 14:21:54.736372 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:21:54.736384 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:21:54.736394 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:21:54.736405 | orchestrator |
2025-06-02 14:21:54.736416 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-02 14:21:54.736427 | orchestrator | Monday 02 June 2025 14:15:49 +0000 (0:00:01.437) 0:00:18.789 ***********
2025-06-02 14:21:54.736504 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-02 14:21:54.736516 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-02 14:21:54.736527 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-02 14:21:54.736538 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-02 14:21:54.736549 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-02 14:21:54.736559 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-02 14:21:54.736570 | orchestrator |
2025-06-02 14:21:54.736581 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-02 14:21:54.736592 | orchestrator | Monday 02 June 2025 14:15:52 +0000 (0:00:03.390) 0:00:22.180 ***********
2025-06-02 14:21:54.736603 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:21:54.736614 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:21:54.736624 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:21:54.736664 | orchestrator |
2025-06-02 14:21:54.736676 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
mariadb.cfg if proxysql enabled] ******************* 2025-06-02 14:21:54.736686 | orchestrator | Monday 02 June 2025 14:15:54 +0000 (0:00:01.500) 0:00:23.680 *********** 2025-06-02 14:21:54.736697 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.736708 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.736719 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.736729 | orchestrator | 2025-06-02 14:21:54.736740 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-02 14:21:54.736751 | orchestrator | Monday 02 June 2025 14:15:55 +0000 (0:00:01.710) 0:00:25.391 *********** 2025-06-02 14:21:54.736762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.736789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.736802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.736815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.736833 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.736845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.736857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.736868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.736885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.736896 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.736916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.736928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.736946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.736957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.736969 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.736980 | orchestrator | 2025-06-02 14:21:54.737009 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-02 14:21:54.737168 | orchestrator | Monday 02 June 2025 14:15:56 +0000 (0:00:00.532) 0:00:25.923 *********** 2025-06-02 14:21:54.737182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.737366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.737378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.737405 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.737424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.737459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6', '__omit_place_holder__db56774db2d169c60fb82a6b0aeedc17ff21b4d6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 14:21:54.737471 | orchestrator | 2025-06-02 14:21:54.737482 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-02 14:21:54.737493 | orchestrator | Monday 02 June 2025 14:15:59 +0000 (0:00:02.966) 0:00:28.890 *********** 2025-06-02 14:21:54.737504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}}) 2025-06-02 14:21:54.737516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.737591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.737602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.737614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.737625 | orchestrator |
2025-06-02 14:21:54.737711 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-06-02 14:21:54.737723 | orchestrator | Monday 02 June 2025 14:16:02 +0000 (0:00:03.505) 0:00:32.395 ***********
2025-06-02 14:21:54.737734 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 14:21:54.737745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 14:21:54.737756 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 14:21:54.737767 | orchestrator |
2025-06-02 14:21:54.737778 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-06-02 14:21:54.737789 | orchestrator | Monday 02 June 2025 14:16:04 +0000 (0:00:01.674) 0:00:34.070 ***********
2025-06-02 14:21:54.737800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 14:21:54.737818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 14:21:54.737836 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 14:21:54.737847 | orchestrator |
2025-06-02 14:21:54.737858 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-02 14:21:54.737874 | orchestrator | Monday 02 June 2025 14:16:09 +0000 (0:00:05.243) 0:00:39.314 ***********
2025-06-02 14:21:54.737893 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.737912 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.737931 | orchestrator | skipping: [testbed-node-2]
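The container definitions echoed in the items above map one-to-one onto Docker options: each 'healthcheck' block becomes a Docker healthcheck that runs the given test command inside the container every interval seconds, tolerating retries consecutive failures after a start_period grace window, with a per-probe timeout. haproxy is probed over HTTP on each node's internal API address (healthcheck_curl http://192.168.16.10-12:61313), while proxysql is probed for a listening admin port (healthcheck_listen proxysql 6032). The __omit_place_holder__... strings in the earlier haproxy-ssh items are Ansible's literal rendering of its omit sentinel when a value built with default(omit) ends up inside a list; they mark unused volume slots, nothing more. As a minimal sketch (standard Python only; kolla ships its own helper scripts inside the images, and this is not that implementation), an HTTP probe of this kind boils down to:

    # Exit 0 when the endpoint answers, 1 otherwise; Docker layers the
    # interval/retries/start_period/timeout settings from the definitions
    # above on top of this exit status.
    import sys
    import urllib.request

    def probe(url: str, timeout: float = 30.0) -> int:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 0 if 200 <= resp.status < 400 else 1
        except Exception:
            return 1

    if __name__ == "__main__":
        sys.exit(probe(sys.argv[1] if len(sys.argv) > 1
                       else "http://192.168.16.10:61313"))

The exit status is all Docker inspects: zero keeps the container marked healthy, anything else counts as one failed probe toward the retries limit.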
2025-06-02 14:21:54.737950 | orchestrator |
2025-06-02 14:21:54.737968 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-02 14:21:54.737986 | orchestrator | Monday 02 June 2025 14:16:10 +0000 (0:00:00.651) 0:00:39.965 ***********
2025-06-02 14:21:54.738005 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 14:21:54.738101 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 14:21:54.738124 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 14:21:54.738144 | orchestrator |
2025-06-02 14:21:54.738160 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-02 14:21:54.738171 | orchestrator | Monday 02 June 2025 14:16:12 +0000 (0:00:02.601) 0:00:42.566 ***********
2025-06-02 14:21:54.738180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 14:21:54.738190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 14:21:54.738200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 14:21:54.738209 | orchestrator |
2025-06-02 14:21:54.738219 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-02 14:21:54.738228 | orchestrator | Monday 02 June 2025 14:16:14 +0000 (0:00:01.668) 0:00:44.235 ***********
2025-06-02 14:21:54.738238 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-02 14:21:54.738248 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-02 14:21:54.738257 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-02 14:21:54.738267 | orchestrator |
2025-06-02 14:21:54.738277 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-02 14:21:54.738286 | orchestrator | Monday 02 June 2025 14:16:16 +0000 (0:00:01.569) 0:00:45.804 ***********
2025-06-02 14:21:54.738296 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-02 14:21:54.738305 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-02 14:21:54.738315 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-02 14:21:54.738324 | orchestrator |
2025-06-02 14:21:54.738334 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 14:21:54.738343 | orchestrator | Monday 02 June 2025 14:16:17 +0000 (0:00:01.458) 0:00:47.263 ***********
2025-06-02 14:21:54.738353 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
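haproxy.pem and haproxy-internal.pem, copied just above, are the TLS bundles for the external and internal VIP frontends. HAProxy's crt option loads the server certificate, any intermediate chain and the private key from a single concatenated PEM file, which is why each frontend gets exactly one artifact. A minimal sketch of assembling such a bundle (file names are illustrative assumptions, not the testbed's actual paths):

    # Concatenate certificate and key into the single-PEM form HAProxy's
    # 'crt' option expects; each block ends with exactly one newline.
    from pathlib import Path

    def build_bundle(cert: str, key: str, out: str) -> None:
        parts = [Path(cert).read_text(), Path(key).read_text()]
        Path(out).write_text("".join(p.strip() + "\n" for p in parts))

    build_bundle("testbed.crt", "testbed.key", "haproxy.pem")  # illustrative names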
2025-06-02 14:21:54.738363 | orchestrator |
2025-06-02 14:21:54.738372 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-02 14:21:54.738382 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:00.640) 0:00:47.903 ***********
2025-06-02 14:21:54.738454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 14:21:54.738490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 14:21:54.738513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 14:21:54.738532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 14:21:54.738549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 14:21:54.738566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 14:21:54.738593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.738623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 14:21:54.738672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
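The service-cert-copy tasks that follow iterate the same services dict once per certificate type, and a per-task when condition decides whether anything is copied; because the condition is evaluated per loop item, a disabled feature still produces one skipping line per service rather than dropping the task silently. A sketch of that loop shape (kolla_enable_tls_backend is the kolla-ansible toggle this pattern suggests; treat its name and default here as assumptions):

    # Model of a with_dict loop guarded by a 'when' condition: the loop
    # still visits every item, so the log prints one 'skipping' per service.
    services = {
        "haproxy": {"enabled": True},
        "proxysql": {"enabled": True},
        "keepalived": {"enabled": True},
    }
    kolla_enable_tls_backend = False  # assumed: backend TLS off in this run

    for name, svc in services.items():
        if not (svc["enabled"] and kolla_enable_tls_backend):
            print(f"skipping: (item={{'key': {name!r}, ...}})")
            continue
        print(f"copying backend TLS material for {name}")

That matches the long runs of skipping output for the backend internal TLS certificate and key tasks below.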
2025-06-02 14:21:54.738683 | orchestrator |
2025-06-02 14:21:54.738693 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-06-02 14:21:54.738703 | orchestrator | Monday 02 June 2025 14:16:21 +0000 (0:00:03.164) 0:00:51.068 ***********
2025-06-02 14:21:54.738723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 14:21:54.738734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 14:21:54.738744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group':
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.738754 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.738765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.738781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.738791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.738801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.738817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.738843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.738854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.738865 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.738875 | orchestrator | 2025-06-02 14:21:54.738885 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-02 14:21:54.738894 | orchestrator | Monday 02 June 2025 14:16:22 +0000 (0:00:01.000) 0:00:52.069 *********** 2025-06-02 14:21:54.738905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.738921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.738931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.738941 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.738955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.738971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.738982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.738992 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.739002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739039 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.739048 | orchestrator | 2025-06-02 14:21:54.739058 | orchestrator | TASK [service-cert-copy : mariadb | Copying over 
extra CA certificates] ******** 2025-06-02 14:21:54.739068 | orchestrator | Monday 02 June 2025 14:16:24 +0000 (0:00:01.558) 0:00:53.627 *********** 2025-06-02 14:21:54.739078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739118 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.739128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739170 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.739187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739259 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.739280 | orchestrator | 2025-06-02 14:21:54.739296 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 14:21:54.739313 | orchestrator | Monday 02 June 2025 14:16:24 +0000 (0:00:00.630) 0:00:54.258 *********** 2025-06-02 14:21:54.739328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739384 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.739405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.739554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739617 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.739650 | orchestrator | 2025-06-02 14:21:54.739669 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 14:21:54.739686 | orchestrator | Monday 02 June 2025 14:16:25 +0000 (0:00:00.786) 0:00:55.044 *********** 2025-06-02 14:21:54.739702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.739719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.739736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.739752 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.740890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.740913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.740931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.740939 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.740948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.740956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.740964 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.740972 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.740980 | orchestrator | 2025-06-02 14:21:54.740989 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 14:21:54.740997 | orchestrator | Monday 02 June 2025 14:16:26 +0000 (0:00:01.445) 0:00:56.490 *********** 2025-06-02 14:21:54.741009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741051 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.741060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741084 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.741093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.741140 | orchestrator | 2025-06-02 14:21:54.741148 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-02 14:21:54.741156 | orchestrator | Monday 02 June 2025 14:16:27 +0000 (0:00:00.647) 0:00:57.138 *********** 2025-06-02 14:21:54.741164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741189 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.741198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741239 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.741248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741273 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.741281 | orchestrator | 2025-06-02 14:21:54.741289 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 14:21:54.741297 | orchestrator | Monday 02 June 2025 14:16:28 +0000 (0:00:00.626) 0:00:57.765 *********** 2025-06-02 14:21:54.741305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741339 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.741352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741377 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.741385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 14:21:54.741393 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 14:21:54.741402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 14:21:54.741415 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.741423 | orchestrator | 2025-06-02 14:21:54.741434 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 14:21:54.741442 | orchestrator | Monday 02 June 2025 14:16:31 +0000 (0:00:03.085) 0:01:00.851 *********** 2025-06-02 14:21:54.741450 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 14:21:54.741459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 14:21:54.741471 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 14:21:54.741479 | orchestrator | 2025-06-02 14:21:54.741487 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 14:21:54.741495 | orchestrator | Monday 02 June 2025 14:16:32 +0000 (0:00:01.367) 0:01:02.218 *********** 2025-06-02 14:21:54.741503 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 14:21:54.741511 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 14:21:54.741520 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 14:21:54.741527 | orchestrator | 2025-06-02 14:21:54.741536 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 14:21:54.741544 | orchestrator | Monday 02 June 2025 14:16:34 +0000 (0:00:01.371) 0:01:03.589 *********** 2025-06-02 14:21:54.741551 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 14:21:54.741559 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 14:21:54.741568 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 14:21:54.741576 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 
14:21:54.741583 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.741591 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 14:21:54.741599 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.741607 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 14:21:54.741615 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.741623 | orchestrator | 2025-06-02 14:21:54.741664 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 14:21:54.741673 | orchestrator | Monday 02 June 2025 14:16:34 +0000 (0:00:00.876) 0:01:04.466 *********** 2025-06-02 14:21:54.741681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 14:21:54.741750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 14:21:54.741759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 14:21:54.741772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 14:21:54.741780 | orchestrator | 2025-06-02 14:21:54.741788 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 14:21:54.741796 | orchestrator | Monday 02 June 2025 14:16:37 +0000 (0:00:02.607) 0:01:07.074 *********** 2025-06-02 14:21:54.741804 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.741813 | orchestrator | 2025-06-02 14:21:54.741820 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 14:21:54.741829 | orchestrator | Monday 02 June 2025 14:16:38 +0000 (0:00:00.831) 0:01:07.906 *********** 2025-06-02 14:21:54.741847 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 14:21:54.741863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 14:21:54.741876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.741890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.741911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
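(For reference: the 'haproxy' sub-mapping attached to the aodh-api item above — an internal frontend and an external frontend, both on port 8042, with the external one bound to api.testbed.osism.xyz — is what the haproxy-config role expands into haproxy listen sections. The sketch below is illustrative only, not kolla-ansible's actual Jinja2 template: the VIP placeholders are assumptions, and the default check parameters are borrowed from the custom_member_list entries that appear later in this log for ceph-rgw.)

# Illustrative sketch: expand a kolla-style 'haproxy' service mapping
# (as dumped in the items above) into haproxy listen sections.
# Simplified stand-in for kolla-ansible's real template logic.
def render_listen_sections(haproxy_map, internal_vip, external_vip, backends):
    sections = []
    for name, cfg in haproxy_map.items():
        if cfg.get("enabled") != "yes":
            continue
        # external frontends bind to the external VIP, internal ones to the internal VIP
        vip = external_vip if cfg.get("external") else internal_vip
        lines = [
            f"listen {name}",
            f"    mode {cfg.get('mode', 'http')}",
            f"    bind {vip}:{cfg['listen_port']}",
        ]
        for host, addr in backends:
            # check parameters assumed; they mirror the ceph-rgw
            # custom_member_list entries seen further down in this log
            lines.append(f"    server {host} {addr}:{cfg['port']} check inter 2000 rise 2 fall 5")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)

# Values taken directly from the aodh-api item logged above.
aodh_haproxy = {
    "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                 "port": "8042", "listen_port": "8042"},
    "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                          "external_fqdn": "api.testbed.osism.xyz",
                          "port": "8042", "listen_port": "8042"},
}
backends = [("testbed-node-0", "192.168.16.10"),
            ("testbed-node-1", "192.168.16.11"),
            ("testbed-node-2", "192.168.16.12")]
# INTERNAL_VIP / EXTERNAL_VIP are placeholders; the real VIPs are not shown in this log.
print(render_listen_sections(aodh_haproxy, "INTERNAL_VIP", "EXTERNAL_VIP", backends))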
2025-06-02 14:21:54.741934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.741953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.741976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.741990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 14:21:54.742001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.742047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742066 | orchestrator | 2025-06-02 14:21:54.742074 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 14:21:54.742082 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:03.831) 0:01:11.737 *********** 2025-06-02 14:21:54.742095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 14:21:54.742110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.742118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742141 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.742150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 14:21:54.742158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.742166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742187 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.742200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 14:21:54.742209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.742223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742239 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.742247 | orchestrator | 2025-06-02 14:21:54.742255 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 14:21:54.742263 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:00.700) 0:01:12.438 *********** 2025-06-02 14:21:54.742272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742289 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.742297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742313 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.742321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 14:21:54.742337 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.742345 | orchestrator | 2025-06-02 14:21:54.742357 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 14:21:54.742366 | orchestrator | Monday 02 June 2025 14:16:44 +0000 (0:00:01.223) 0:01:13.661 *********** 2025-06-02 14:21:54.742374 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.742381 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.742389 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.742397 | orchestrator | 2025-06-02 14:21:54.742405 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 14:21:54.742412 | orchestrator | Monday 02 June 2025 14:16:45 +0000 (0:00:01.288) 0:01:14.950 *********** 2025-06-02 14:21:54.742426 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.742434 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.742442 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.742449 | orchestrator | 2025-06-02 14:21:54.742457 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-02 14:21:54.742465 | orchestrator | Monday 02 June 2025 14:16:47 +0000 (0:00:01.892) 0:01:16.842 *********** 2025-06-02 14:21:54.742473 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.742481 | orchestrator | 2025-06-02 14:21:54.742488 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 14:21:54.742496 | orchestrator | Monday 02 June 2025 14:16:47 +0000 (0:00:00.656) 0:01:17.499 *********** 2025-06-02 14:21:54.742546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}}) 2025-06-02 14:21:54.742563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.742572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2025-06-02 14:21:54.742623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.742652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742669 | orchestrator | 2025-06-02 14:21:54.742677 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-02 14:21:54.742685 | orchestrator | Monday 02 June 2025 14:16:51 +0000 (0:00:04.065) 0:01:21.564 *********** 2025-06-02 14:21:54.742703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
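(For reference: every container definition dumped in these tasks carries a 'healthcheck' mapping — interval, retries, start_period, timeout, plus a CMD-SHELL test such as healthcheck_curl, healthcheck_port, or healthcheck_listen. These fields correspond to Docker's native healthcheck options. The sketch below shows a rough translation, assuming the values are seconds; the real mapping is performed inside kolla-ansible's kolla_docker module, so the flag spelling here is only an illustration.)

# Rough sketch: turn a kolla-style healthcheck dict into `docker run` flags.
# Units assumed to be seconds; illustration only, not kolla_docker's code.
def healthcheck_flags(hc):
    test = hc["test"]
    # CMD-SHELL tests carry the shell command as the second list element
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values taken verbatim from the barbican-api item logged above.
barbican_hc = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
               "timeout": "30"}
print(" ".join(healthcheck_flags(barbican_hc)))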
2025-06-02 14:21:54.742720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742736 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.742745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.742754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.742784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.742809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.742817 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.742825 | orchestrator | 2025-06-02 14:21:54.742833 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-02 14:21:54.742841 | orchestrator | Monday 02 June 2025 14:16:52 +0000 (0:00:00.703) 0:01:22.268 *********** 2025-06-02 14:21:54.742849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742867 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.742875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742891 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.742899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 14:21:54.742921 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.742929 | orchestrator | 2025-06-02 14:21:54.742937 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-02 14:21:54.742948 | orchestrator | Monday 02 June 2025 14:16:53 +0000 (0:00:00.877) 0:01:23.145 *********** 2025-06-02 14:21:54.742956 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.742964 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.742972 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.742980 | orchestrator | 2025-06-02 14:21:54.742988 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-02 14:21:54.742996 | orchestrator | Monday 02 June 2025 14:16:55 +0000 (0:00:02.271) 0:01:25.417 *********** 2025-06-02 14:21:54.743004 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.743012 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.743020 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.743028 | orchestrator | 2025-06-02 14:21:54.743040 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-02 14:21:54.743049 | orchestrator | Monday 02 June 2025 14:16:58 +0000 (0:00:02.200) 0:01:27.617 *********** 2025-06-02 14:21:54.743057 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.743064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.743072 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.743080 | orchestrator | 2025-06-02 14:21:54.743088 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-02 14:21:54.743096 | orchestrator | Monday 02 June 2025 14:16:58 +0000 (0:00:00.342) 0:01:27.959 *********** 2025-06-02 14:21:54.743104 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.743111 | orchestrator | 2025-06-02 14:21:54.743119 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-02 14:21:54.743127 | orchestrator | Monday 02 June 2025 14:16:59 +0000 (0:00:00.677) 0:01:28.637 *********** 2025-06-02 14:21:54.743135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 14:21:54.743145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 14:21:54.743153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 14:21:54.743165 | orchestrator | 2025-06-02 14:21:54.743173 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-02 14:21:54.743181 | orchestrator | Monday 02 June 2025 14:17:02 +0000 (0:00:02.963) 0:01:31.600 *********** 2025-06-02 14:21:54.745244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 14:21:54.745263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.745272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': 
{'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 14:21:54.745280 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.745289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 14:21:54.745297 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.745305 | orchestrator | 2025-06-02 14:21:54.745313 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-02 14:21:54.745322 | orchestrator | Monday 02 June 2025 14:17:03 +0000 (0:00:01.569) 0:01:33.170 *********** 2025-06-02 14:21:54.745339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745358 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.745366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745387 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.745413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 14:21:54.745430 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.745443 | orchestrator | 2025-06-02 14:21:54.745458 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-02 14:21:54.745471 | orchestrator | Monday 02 June 2025 14:17:05 +0000 (0:00:02.059) 0:01:35.229 *********** 2025-06-02 14:21:54.745485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.745498 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.745512 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.745526 | orchestrator | 2025-06-02 14:21:54.745540 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-02 14:21:54.745553 | orchestrator | Monday 02 June 2025 14:17:06 +0000 (0:00:00.962) 0:01:36.192 *********** 2025-06-02 14:21:54.745562 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.745570 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.745578 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.745586 | orchestrator | 2025-06-02 14:21:54.745594 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-02 14:21:54.745602 | orchestrator | Monday 02 June 2025 14:17:07 +0000 (0:00:01.007) 0:01:37.199 *********** 2025-06-02 14:21:54.745616 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.745624 | orchestrator | 2025-06-02 14:21:54.745659 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-02 14:21:54.745671 | orchestrator | Monday 02 June 2025 14:17:08 +0000 (0:00:00.975) 0:01:38.175 *********** 2025-06-02 14:21:54.745685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.745694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.745808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.745860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745890 | orchestrator | 2025-06-02 14:21:54.745898 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 14:21:54.745906 | orchestrator | Monday 02 June 2025 14:17:11 +0000 (0:00:03.355) 0:01:41.530 *********** 2025-06-02 14:21:54.745914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.745926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.745972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.745980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.745992 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746077 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.746086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.746094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746131 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.746139 | orchestrator | 2025-06-02 14:21:54.746151 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-02 14:21:54.746160 | orchestrator | Monday 02 June 2025 14:17:13 +0000 (0:00:01.235) 0:01:42.766 *********** 2025-06-02 14:21:54.746169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746216 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.746224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746232 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.746240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 14:21:54.746267 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.746274 | orchestrator | 2025-06-02 14:21:54.746282 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] 
************* 2025-06-02 14:21:54.746290 | orchestrator | Monday 02 June 2025 14:17:14 +0000 (0:00:01.023) 0:01:43.789 *********** 2025-06-02 14:21:54.746298 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.746306 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.746314 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.746322 | orchestrator | 2025-06-02 14:21:54.746338 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 14:21:54.746346 | orchestrator | Monday 02 June 2025 14:17:15 +0000 (0:00:01.403) 0:01:45.192 *********** 2025-06-02 14:21:54.746354 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.746362 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.746369 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.746377 | orchestrator | 2025-06-02 14:21:54.746385 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 14:21:54.746393 | orchestrator | Monday 02 June 2025 14:17:17 +0000 (0:00:02.049) 0:01:47.242 *********** 2025-06-02 14:21:54.746401 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.746409 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.746417 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.746425 | orchestrator | 2025-06-02 14:21:54.746433 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 14:21:54.746441 | orchestrator | Monday 02 June 2025 14:17:18 +0000 (0:00:00.606) 0:01:47.849 *********** 2025-06-02 14:21:54.746449 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.746457 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.746464 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.746472 | orchestrator | 2025-06-02 14:21:54.746480 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-02 14:21:54.746488 | orchestrator | Monday 02 June 2025 14:17:18 +0000 (0:00:00.397) 0:01:48.246 *********** 2025-06-02 14:21:54.746496 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.746504 | orchestrator | 2025-06-02 14:21:54.746511 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 14:21:54.746519 | orchestrator | Monday 02 June 2025 14:17:19 +0000 (0:00:00.782) 0:01:49.029 *********** 2025-06-02 14:21:54.746538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 14:21:54.746585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.746601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 14:21:54.746754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.746769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 14:21:54.746893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.746906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746921 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.746991 | orchestrator | 2025-06-02 14:21:54.747006 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 14:21:54.747043 | orchestrator | Monday 02 June 2025 14:17:24 +0000 (0:00:04.667) 0:01:53.696 *********** 2025-06-02 14:21:54.747080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 14:21:54.747095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.747110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747221 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.747230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 14:21:54.747239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.747247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747323 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.747332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 14:21:54.747340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 14:21:54.747354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.747431 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.747445 | orchestrator | 2025-06-02 14:21:54.747459 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-02 14:21:54.747473 | orchestrator | Monday 02 June 2025 14:17:24 +0000 (0:00:00.835) 0:01:54.531 *********** 2025-06-02 14:21:54.747486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747524 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.747538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747582 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.747609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 14:21:54.747658 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.747672 | orchestrator | 2025-06-02 14:21:54.747685 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-02 14:21:54.747698 | orchestrator | Monday 02 June 2025 14:17:25 +0000 (0:00:01.007) 0:01:55.539 *********** 2025-06-02 14:21:54.747711 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.747724 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.747737 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.747752 | orchestrator | 2025-06-02 14:21:54.747764 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-02 14:21:54.747776 | orchestrator | Monday 02 June 2025 14:17:27 +0000 (0:00:01.777) 0:01:57.316 *********** 2025-06-02 14:21:54.747785 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.747792 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.747800 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.747808 | orchestrator | 2025-06-02 14:21:54.747816 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-02 14:21:54.747837 | orchestrator | Monday 02 June 2025 14:17:29 +0000 (0:00:02.043) 0:01:59.360 *********** 2025-06-02 14:21:54.747849 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.747863 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.747872 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.747879 | orchestrator | 2025-06-02 14:21:54.747887 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-02 14:21:54.747895 | orchestrator | Monday 02 June 2025 14:17:30 +0000 (0:00:00.350) 0:01:59.711 *********** 2025-06-02 14:21:54.747903 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.747911 | orchestrator | 2025-06-02 14:21:54.747924 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-02 14:21:54.747932 | orchestrator | Monday 02 June 2025 14:17:30 +0000 (0:00:00.771) 0:02:00.482 *********** 2025-06-02 14:21:54.747965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 14:21:54.747986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
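(Aside on the two glance items just logged for testbed-node-0: the `glance_api` haproxy entry carries everything the haproxy-config role needs to render a backend — a `custom_member_list` of fully formed `server` lines with `check inter 2000 rise 2 fall 5`, plus `ssl verify required ca-file ...` variants in the tls-proxy case, and 6h client/server timeouts for long image uploads. As a rough illustration of how such a dict maps onto an HAProxy stanza — a hand-written sketch, not kolla-ansible's actual Jinja2 template:)

```python
# Minimal sketch: render an HAProxy backend stanza from a dict shaped like the
# 'glance_api' haproxy entry in the log above. render_backend is a hypothetical
# helper for illustration, not kolla-ansible's real template logic.
def render_backend(name: str, svc: dict) -> str:
    lines = [f"backend {name}_back", f"    mode {svc['mode']}"]
    # backend_http_extra carries raw option lines such as 'timeout server 6h'
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    # custom_member_list already contains complete 'server ...' lines; the
    # logged lists end with an empty-string entry, which we skip
    for member in svc.get("custom_member_list", []):
        if member:
            lines.append(f"    {member}")
    return "\n".join(lines)

glance_api = {
    "mode": "http",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",
    ],
}

print(render_backend("glance_api", glance_api))
```

(The per-node loop results for testbed-node-2 and testbed-node-1 follow.)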
2025-06-02 14:21:54.748005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 14:21:54.748021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.748044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 14:21:54.748112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.748145 | orchestrator | 2025-06-02 14:21:54.748154 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-02 14:21:54.748162 | orchestrator | Monday 02 June 2025 14:17:35 +0000 (0:00:04.193) 0:02:04.676 *********** 2025-06-02 14:21:54.748190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 14:21:54.748201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.748215 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.748227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 14:21:54.748251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.748265 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.748274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 14:21:54.748293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.748307 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.748315 | orchestrator | 2025-06-02 14:21:54.748323 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 14:21:54.748331 | orchestrator | Monday 02 June 2025 14:17:38 +0000 (0:00:03.327) 0:02:08.003 *********** 2025-06-02 14:21:54.748339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748377 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.748385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748402 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.748414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 14:21:54.748451 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.748459 | orchestrator | 2025-06-02 14:21:54.748467 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 14:21:54.748475 | orchestrator | Monday 02 June 2025 14:17:42 +0000 (0:00:03.986) 0:02:11.990 *********** 2025-06-02 14:21:54.748483 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.748491 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.748499 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.748507 | orchestrator | 2025-06-02 14:21:54.748526 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 14:21:54.748534 | orchestrator | Monday 02 June 2025 14:17:43 +0000 (0:00:01.432) 0:02:13.422 *********** 2025-06-02 14:21:54.748542 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.748549 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.748557 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.748565 | orchestrator | 2025-06-02 14:21:54.748573 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 14:21:54.748581 | orchestrator | Monday 02 June 2025 14:17:46 +0000 (0:00:02.436) 0:02:15.859 *********** 2025-06-02 14:21:54.748597 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.748613 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.748621 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.748775 | orchestrator | 2025-06-02 14:21:54.748796 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 14:21:54.748804 | orchestrator | Monday 02 June 2025 14:17:46 +0000 (0:00:00.349) 0:02:16.209 *********** 2025-06-02 14:21:54.748812 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.748820 | orchestrator | 2025-06-02 14:21:54.748827 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 14:21:54.748835 | orchestrator | Monday 02 June 2025 14:17:47 +0000 (0:00:01.080) 0:02:17.290 *********** 2025-06-02 14:21:54.748844 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 14:21:54.748854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 14:21:54.748876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 14:21:54.748902 | orchestrator | 2025-06-02 14:21:54.748916 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 14:21:54.748924 | orchestrator | Monday 02 June 2025 14:17:53 +0000 (0:00:05.324) 0:02:22.615 *********** 2025-06-02 14:21:54.748952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 14:21:54.748962 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.748970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 14:21:54.748978 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.748987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 14:21:54.748995 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749003 | orchestrator | 2025-06-02 14:21:54.749011 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 14:21:54.749018 | orchestrator | Monday 02 June 2025 14:17:53 +0000 (0:00:00.694) 0:02:23.309 *********** 2025-06-02 14:21:54.749027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749044 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.749052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749073 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.749081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 14:21:54.749097 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749105 | orchestrator | 2025-06-02 14:21:54.749113 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 14:21:54.749121 | orchestrator | Monday 02 June 2025 14:17:54 +0000 (0:00:00.839) 0:02:24.149 *********** 2025-06-02 14:21:54.749129 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.749136 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.749144 | orchestrator | changed: [testbed-node-2] 2025-06-02 
14:21:54.749152 | orchestrator | 2025-06-02 14:21:54.749164 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-02 14:21:54.749172 | orchestrator | Monday 02 June 2025 14:17:56 +0000 (0:00:01.746) 0:02:25.895 *********** 2025-06-02 14:21:54.749180 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.749188 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.749195 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.749203 | orchestrator | 2025-06-02 14:21:54.749211 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-02 14:21:54.749219 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:01.961) 0:02:27.857 *********** 2025-06-02 14:21:54.749227 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.749235 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.749257 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749271 | orchestrator | 2025-06-02 14:21:54.749284 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-02 14:21:54.749298 | orchestrator | Monday 02 June 2025 14:17:58 +0000 (0:00:00.305) 0:02:28.162 *********** 2025-06-02 14:21:54.749310 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.749323 | orchestrator | 2025-06-02 14:21:54.749336 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-02 14:21:54.749344 | orchestrator | Monday 02 June 2025 14:17:59 +0000 (0:00:00.672) 0:02:28.834 *********** 2025-06-02 14:21:54.749354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:21:54.749396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:21:54.749407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:21:54.749431 | orchestrator | 2025-06-02 14:21:54.749440 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-02 14:21:54.749448 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:05.309) 0:02:34.144 *********** 2025-06-02 14:21:54.749475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:21:54.749485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.749494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:21:54.749508 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:21:54.749546 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.749554 | orchestrator | 2025-06-02 14:21:54.749567 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-02 14:21:54.749587 | orchestrator | Monday 02 June 2025 14:18:05 +0000 (0:00:01.352) 0:02:35.497 *********** 2025-06-02 14:21:54.749596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 14:21:54.749671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749680 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.749705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 14:21:54.749753 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.749762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 14:21:54.749787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 14:21:54.749796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 14:21:54.749804 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749813 | orchestrator | 2025-06-02 14:21:54.749822 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-02 14:21:54.749843 | orchestrator | Monday 02 June 2025 14:18:07 +0000 (0:00:01.222) 0:02:36.720 *********** 2025-06-02 14:21:54.749852 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.749860 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.749869 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 14:21:54.749878 | orchestrator | 2025-06-02 14:21:54.749886 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-02 14:21:54.749895 | orchestrator | Monday 02 June 2025 14:18:08 +0000 (0:00:01.528) 0:02:38.248 *********** 2025-06-02 14:21:54.749903 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.749912 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.749921 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.749930 | orchestrator | 2025-06-02 14:21:54.749938 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-02 14:21:54.749947 | orchestrator | Monday 02 June 2025 14:18:10 +0000 (0:00:02.238) 0:02:40.486 *********** 2025-06-02 14:21:54.749956 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.749964 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.749973 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.749982 | orchestrator | 2025-06-02 14:21:54.749990 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-02 14:21:54.749999 | orchestrator | Monday 02 June 2025 14:18:11 +0000 (0:00:00.339) 0:02:40.825 *********** 2025-06-02 14:21:54.750008 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.750060 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.750071 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.750080 | orchestrator | 2025-06-02 14:21:54.750089 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-02 14:21:54.750098 | orchestrator | Monday 02 June 2025 14:18:11 +0000 (0:00:00.302) 0:02:41.128 *********** 2025-06-02 14:21:54.750106 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.750115 | orchestrator | 2025-06-02 14:21:54.750124 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-02 14:21:54.750132 | orchestrator | Monday 02 June 2025 14:18:12 +0000 (0:00:01.156) 0:02:42.284 *********** 2025-06-02 14:21:54.750164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:21:54.750182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:21:54.750192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:21:54.750267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750285 | orchestrator | 2025-06-02 14:21:54.750294 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-02 14:21:54.750303 | orchestrator | Monday 02 June 2025 14:18:16 +0000 (0:00:03.888) 0:02:46.173 *********** 2025-06-02 14:21:54.750313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:21:54.750326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750366 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.750376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:21:54.750385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750403 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.750419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:21:54.750448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:21:54.750458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:21:54.750467 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.750484 | orchestrator | 2025-06-02 14:21:54.750493 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-02 14:21:54.750501 | orchestrator | Monday 02 June 2025 14:18:17 +0000 (0:00:00.641) 0:02:46.814 *********** 2025-06-02 14:21:54.750511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750530 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.750539 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750557 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.750566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 14:21:54.750594 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.750602 | orchestrator | 2025-06-02 14:21:54.750611 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-02 14:21:54.750620 | orchestrator | Monday 02 June 2025 14:18:18 +0000 (0:00:01.084) 0:02:47.899 *********** 2025-06-02 14:21:54.750649 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.750658 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.750676 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.750685 | orchestrator | 2025-06-02 14:21:54.750694 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-02 14:21:54.750702 | orchestrator | Monday 02 June 2025 14:18:19 +0000 (0:00:01.297) 0:02:49.197 *********** 2025-06-02 14:21:54.750711 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.750720 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.750728 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.750737 | orchestrator | 2025-06-02 14:21:54.750746 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-02 14:21:54.750754 | orchestrator | Monday 02 June 2025 14:18:21 +0000 (0:00:01.938) 0:02:51.136 *********** 2025-06-02 14:21:54.750763 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.750772 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.750791 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.750800 | orchestrator | 2025-06-02 14:21:54.750812 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-02 14:21:54.750821 | orchestrator | Monday 02 June 2025 14:18:21 +0000 (0:00:00.309) 0:02:51.445 *********** 2025-06-02 14:21:54.750830 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.750839 | orchestrator | 2025-06-02 14:21:54.750847 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-02 14:21:54.750856 | orchestrator | Monday 02 June 2025 14:18:23 +0000 (0:00:01.242) 0:02:52.687 *********** 2025-06-02 14:21:54.750881 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 14:21:54.750892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.750902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 14:21:54.750920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.750948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 14:21:54.750959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.750968 | orchestrator | 2025-06-02 14:21:54.750976 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-02 14:21:54.750985 | orchestrator | Monday 02 June 2025 14:18:26 +0000 (0:00:03.236) 0:02:55.923 *********** 2025-06-02 14:21:54.750994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 14:21:54.751003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751018 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
14:21:54.751027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 14:21:54.751054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.751074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 14:21:54.751083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751097 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
14:21:54.751106 | orchestrator | 2025-06-02 14:21:54.751114 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-02 14:21:54.751123 | orchestrator | Monday 02 June 2025 14:18:26 +0000 (0:00:00.645) 0:02:56.569 *********** 2025-06-02 14:21:54.751132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751150 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.751159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751177 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.751185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 14:21:54.751214 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.751223 | orchestrator | 2025-06-02 14:21:54.751231 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-02 14:21:54.751240 | orchestrator | Monday 02 June 2025 14:18:28 +0000 (0:00:01.389) 0:02:57.958 *********** 2025-06-02 14:21:54.751249 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.751266 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.751275 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.751284 | orchestrator | 2025-06-02 14:21:54.751297 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-02 14:21:54.751306 | orchestrator | Monday 02 June 2025 14:18:29 +0000 (0:00:01.204) 0:02:59.163 *********** 2025-06-02 14:21:54.751314 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.751323 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.751332 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.751349 | orchestrator | 2025-06-02 14:21:54.751358 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-02 14:21:54.751367 | orchestrator | Monday 02 June 2025 14:18:31 +0000 (0:00:01.992) 0:03:01.155 *********** 2025-06-02 14:21:54.751392 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.751402 | orchestrator | 2025-06-02 14:21:54.751411 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-02 14:21:54.751419 | orchestrator | Monday 02 June 2025 14:18:32 +0000 (0:00:00.996) 0:03:02.152 
*********** 2025-06-02 14:21:54.751428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 14:21:54.751447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 14:21:54.751504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 14:21:54.751548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751594 | orchestrator | 2025-06-02 14:21:54.751603 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-02 14:21:54.751612 | orchestrator | Monday 02 June 2025 14:18:36 +0000 (0:00:03.632) 0:03:05.784 *********** 2025-06-02 14:21:54.751621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 14:21:54.751647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751674 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.751690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 14:21:54.751743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751778 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.751787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 14:21:54.751796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.751859 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.751869 | orchestrator | 2025-06-02 14:21:54.751878 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-02 14:21:54.751886 | orchestrator | Monday 02 June 2025 14:18:36 +0000 (0:00:00.696) 0:03:06.480 
*********** 2025-06-02 14:21:54.751895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751922 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.751931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751948 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.751957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 14:21:54.751975 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.751984 | orchestrator | 2025-06-02 14:21:54.751992 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-02 14:21:54.752001 | orchestrator | Monday 02 June 2025 14:18:37 +0000 (0:00:00.883) 0:03:07.364 *********** 2025-06-02 14:21:54.752010 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.752027 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.752036 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.752045 | orchestrator | 2025-06-02 14:21:54.752053 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-02 14:21:54.752062 | orchestrator | Monday 02 June 2025 14:18:39 +0000 (0:00:01.574) 0:03:08.938 *********** 2025-06-02 14:21:54.752071 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.752080 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.752088 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.752097 | orchestrator | 2025-06-02 14:21:54.752106 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-02 14:21:54.752114 | orchestrator | Monday 02 June 2025 14:18:41 +0000 (0:00:02.045) 0:03:10.984 *********** 2025-06-02 14:21:54.752123 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.752140 | orchestrator | 2025-06-02 14:21:54.752150 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-02 14:21:54.752159 | orchestrator | Monday 02 June 2025 14:18:42 +0000 (0:00:01.162) 0:03:12.147 *********** 2025-06-02 14:21:54.752167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 14:21:54.752176 | orchestrator | 2025-06-02 14:21:54.752185 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 
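The custom_member_list entries in the mariadb task items that follow are literal HAProxy "server" lines, so the listen block this task renders can be read off almost directly. A minimal sketch of the result, assuming the bind address is the kolla internal VIP (192.168.16.9, which appears as the extra no_proxy address elsewhere in this log) and that the template emits the frontend/backend extras verbatim — the exact generated file may differ:

listen mariadb
  mode tcp
  bind 192.168.16.9:3306
  # from frontend_tcp_extra
  option clitcpka
  timeout client 3600s
  # from backend_tcp_extra
  option srvtcpka
  timeout server 3600s
  # one writer, two backups: traffic goes to testbed-node-0 unless its
  # health check (TCP port 3306, 2000 ms interval, rise 2 / fall 5) marks it down
  server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
  server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
  server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup

Sending all connections to a single Galera member at a time (active/backup rather than the round-robin used for the stateless API services above) is the usual way to avoid multi-writer certification conflicts; the backup servers only receive traffic once the primary fails its checks. The mariadb_external_lb variant in the same items is disabled ('enabled': False) in this deployment.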
2025-06-02 14:21:54.752193 | orchestrator | Monday 02 June 2025 14:18:45 +0000 (0:00:02.899) 0:03:15.046 ***********
2025-06-02 14:21:54.752224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752250 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.752260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752284 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.752351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752379 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.752387 | orchestrator |
2025-06-02 14:21:54.752396 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-02 14:21:54.752405 | orchestrator | Monday 02 June 2025 14:18:48 +0000 (0:00:02.791) 0:03:17.837 ***********
2025-06-02 14:21:54.752419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752462 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.752471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752495 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.752524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 14:21:54.752536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 14:21:54.752545 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.752553 | orchestrator |
2025-06-02 14:21:54.752562 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-02 14:21:54.752575 | orchestrator | Monday 02 June 2025 14:18:50 +0000 (0:00:02.049) 0:03:19.887 ***********
2025-06-02 14:21:54.752589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752730 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.752749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 14:21:54.752840 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.752849 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.752858 | orchestrator |
2025-06-02 14:21:54.752867 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-02 14:21:54.752876 | orchestrator | Monday 02 June 2025 14:18:52 +0000 (0:00:02.656) 0:03:22.543 ***********
2025-06-02 14:21:54.752885 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:21:54.752893 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:21:54.752902 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:21:54.752911 | orchestrator |
2025-06-02 14:21:54.752920 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-02 14:21:54.752928 | orchestrator | Monday 02 June 2025 14:18:55 +0000 (0:00:02.131) 0:03:24.675 ***********
2025-06-02 14:21:54.752937 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.752946 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.752954 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.752963 | orchestrator |
2025-06-02 14:21:54.752972 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-02 14:21:54.752987 | orchestrator | Monday 02 June 2025 14:18:56 +0000 (0:00:00.331) 0:03:26.082 ***********
2025-06-02 14:21:54.752995 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753004 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753013 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753022 | orchestrator |
2025-06-02 14:21:54.753031 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-02 14:21:54.753040 | orchestrator | Monday 02 June 2025 14:18:56 +0000 (0:00:00.331) 0:03:26.413 ***********
2025-06-02 14:21:54.753050 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:21:54.753060 | orchestrator |
2025-06-02 14:21:54.753069 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-02 14:21:54.753079 | orchestrator | Monday 02 June 2025 14:18:57 +0000 (0:00:01.129) 0:03:27.542 ***********
2025-06-02 14:21:54.753090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753143 | orchestrator |
2025-06-02 14:21:54.753153 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-02 14:21:54.753163 | orchestrator | Monday 02 June 2025 14:18:59 +0000 (0:00:01.808) 0:03:29.351 ***********
2025-06-02 14:21:54.753173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753189 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753209 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 14:21:54.753241 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753251 | orchestrator |
2025-06-02 14:21:54.753260 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-02 14:21:54.753270 | orchestrator | Monday 02 June 2025 14:19:00 +0000 (0:00:00.421) 0:03:29.773 ***********
2025-06-02 14:21:54.753280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 14:21:54.753290 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 14:21:54.753315 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 14:21:54.753351 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753360 | orchestrator |
2025-06-02 14:21:54.753370 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-02 14:21:54.753380 | orchestrator | Monday 02 June 2025 14:19:00 +0000 (0:00:00.583) 0:03:30.356 ***********
2025-06-02 14:21:54.753389 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753399 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753409 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753418 | orchestrator |
2025-06-02 14:21:54.753428 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-02 14:21:54.753437 | orchestrator | Monday 02 June 2025 14:19:01 +0000 (0:00:00.733) 0:03:31.090 ***********
2025-06-02 14:21:54.753464 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753474 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753484 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753494 | orchestrator |
2025-06-02 14:21:54.753504 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-02 14:21:54.753513 | orchestrator | Monday 02 June 2025 14:19:02 +0000 (0:00:01.247) 0:03:32.337 ***********
2025-06-02 14:21:54.753523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:21:54.753533 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:21:54.753543 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:21:54.753552 | orchestrator |
2025-06-02 14:21:54.753562 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-02 14:21:54.753572 | orchestrator | Monday 02 June 2025 14:19:03 +0000 (0:00:00.332) 0:03:32.670 ***********
2025-06-02 14:21:54.753582 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:21:54.753591 | orchestrator |
2025-06-02 14:21:54.753601 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-02 14:21:54.753611 | orchestrator | Monday 02 June 2025 14:19:04 +0000 (0:00:01.524) 0:03:34.195 ***********
2025-06-02 14:21:54.753621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 14:21:54.753654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 14:21:54.753740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.753775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.753792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.753870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 14:21:54.753892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.753902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 14:21:54.753929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.753962 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 14:21:54.753984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.753994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 14:21:54.754138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.754210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 14:21:54.754246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 14:21:54.754334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.754355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 14:21:54.754409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.754469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 14:21:54.754527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 14:21:54.754547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 14:21:54.754571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754587 | orchestrator |
2025-06-02 14:21:54.754597 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-06-02 14:21:54.754616 | orchestrator | Monday 02 June 2025 14:19:09 +0000 (0:00:05.140) 0:03:39.335 ***********
2025-06-02 14:21:54.754673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 14:21:54.754685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 14:21:54.754705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 14:21:54.754747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.754774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 14:21:54.754784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-06-02 14:21:54.754794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.754873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 14:21:54.754883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 14:21:54.754928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 
14:21:54.754938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.754948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.754959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.754984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.755015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.755026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.755046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755072 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.755082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.755133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.755143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
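Each skipped item above is one entry from the kolla-ansible neutron role's service map: a dict keyed by service name carrying container_name, image, enabled, group/host_in_groups, volumes, dimensions, and optional healthcheck and haproxy sub-mappings. The haproxy-config tasks loop over this map per host and skip entries whose conditions evaluate false; on the "single external frontend" task even the enabled neutron-server entries skip, presumably because the single-external-frontend option is not enabled in this testbed (the skip condition itself is not printed). As a YAML rendering of one item reconstructed from the logged dict above (not taken from the role source, so the real defaults are templated and may differ):

    neutron-ovn-metadata-agent:
      container_name: neutron_ovn_metadata_agent
      image: registry.osism.tech/kolla/neutron-metadata-agent:2024.2
      privileged: true
      enabled: true            # the only neutron agent enabled in this OVN-based deployment
      host_in_groups: false    # evaluated per host, as logged for testbed-node-0
      volumes:
        - /etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - neutron_metadata_socket:/var/lib/neutron/kolla/
        - /run/openvswitch:/run/openvswitch:shared
        - /run/netns:/run/netns:shared
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"]
        timeout: "30"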
2025-06-02 14:21:54.755168 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.755178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 14:21:54.755193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 14:21:54.755267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.755350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 14:21:54.755388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 14:21:54.755439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 14:21:54.755449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.755466 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.755476 | orchestrator | 2025-06-02 14:21:54.755486 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 14:21:54.755496 | orchestrator | Monday 02 June 2025 14:19:11 +0000 (0:00:02.040) 0:03:41.376 *********** 2025-06-02 14:21:54.755505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755538 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.755548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.755577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 14:21:54.755596 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.755605 | orchestrator | 2025-06-02 14:21:54.755615 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 14:21:54.755625 | orchestrator | Monday 02 June 2025 14:19:13 +0000 (0:00:02.169) 0:03:43.545 *********** 2025-06-02 14:21:54.755650 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.755660 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.755669 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.755679 | orchestrator | 2025-06-02 14:21:54.755688 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 14:21:54.755698 | orchestrator | Monday 02 June 2025 14:19:15 +0000 (0:00:01.234) 0:03:44.779 *********** 2025-06-02 14:21:54.755708 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.755717 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.755727 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.755736 | orchestrator | 2025-06-02 14:21:54.755746 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 14:21:54.755759 | orchestrator | Monday 02 June 2025 14:19:17 +0000 (0:00:02.103) 0:03:46.883 *********** 2025-06-02 14:21:54.755769 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.755779 | orchestrator | 2025-06-02 14:21:54.755788 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 14:21:54.755798 | orchestrator | Monday 02 June 2025 14:19:18 +0000 (0:00:01.210) 0:03:48.093 *********** 2025-06-02 14:21:54.755823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.755841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.755852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.755862 | orchestrator | 2025-06-02 14:21:54.755872 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 14:21:54.755882 | orchestrator | Monday 02 June 2025 14:19:22 +0000 (0:00:03.592) 0:03:51.686 *********** 2025-06-02 14:21:54.755892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.755902 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.755931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.755949 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.755959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.755969 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.755979 | orchestrator | 2025-06-02 14:21:54.755989 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 14:21:54.755998 | orchestrator | Monday 02 June 2025 14:19:22 +0000 (0:00:00.491) 0:03:52.178 *********** 2025-06-02 14:21:54.756008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756028 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.756038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756058 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.756068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756088 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.756097 | orchestrator | 2025-06-02 14:21:54.756107 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 14:21:54.756117 | orchestrator | Monday 02 June 2025 14:19:23 +0000 (0:00:00.789) 0:03:52.968 *********** 2025-06-02 14:21:54.756126 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.756136 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.756146 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.756162 | orchestrator | 2025-06-02 14:21:54.756171 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-02 14:21:54.756181 | orchestrator | Monday 02 June 2025 14:19:24 +0000 (0:00:01.532) 0:03:54.500 *********** 
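The firewall and frontend tasks above iterate over the haproxy sub-mapping of each service rather than the service map itself: one entry per listener, with enabled, mode, an external flag, port, listen_port, and external_fqdn/tls_backend where relevant, from which haproxy-config renders one frontend/backend pair per enabled entry. Rendered as YAML from the placement items logged above (again reconstructed from the log output, not from the role defaults), the internal/external listener pair looks like:

    placement-api:
      haproxy:
        placement_api:              # internal VIP listener
          enabled: true
          mode: http
          external: false
          port: "8780"
          listen_port: "8780"
          tls_backend: "no"
        placement_api_external:     # public listener behind api.testbed.osism.xyz
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "8780"
          listen_port: "8780"
          tls_backend: "no"

The "Configuring firewall" tasks skip on every node here even for enabled listeners, presumably because this deployment does not manage host firewall rules through kolla-ansible; as with the skips above, the condition is not shown in the log.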
2025-06-02 14:21:54.756191 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.756200 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.756210 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.756220 | orchestrator | 2025-06-02 14:21:54.756234 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 14:21:54.756243 | orchestrator | Monday 02 June 2025 14:19:26 +0000 (0:00:02.056) 0:03:56.557 *********** 2025-06-02 14:21:54.756253 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.756263 | orchestrator | 2025-06-02 14:21:54.756272 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 14:21:54.756282 | orchestrator | Monday 02 June 2025 14:19:28 +0000 (0:00:01.362) 0:03:57.919 *********** 2025-06-02 14:21:54.756309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.756321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.756378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.756411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756437 | orchestrator | 2025-06-02 14:21:54.756446 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 14:21:54.756456 | orchestrator | Monday 02 June 2025 14:19:32 +0000 (0:00:04.324) 0:04:02.244 *********** 2025-06-02 14:21:54.756500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.756513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 
14:21:54.756523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756533 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.756544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.756560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756584 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.756611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.756623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.756679 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.756690 | orchestrator | 2025-06-02 14:21:54.756700 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 14:21:54.756710 | orchestrator | Monday 02 June 2025 14:19:33 +0000 (0:00:01.018) 0:04:03.263 *********** 2025-06-02 14:21:54.756721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756764 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.756779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756837 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.756847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 14:21:54.756886 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.756896 | orchestrator | 2025-06-02 14:21:54.756906 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-02 14:21:54.756916 | orchestrator | Monday 02 June 2025 14:19:34 +0000 (0:00:00.884) 0:04:04.148 *********** 2025-06-02 14:21:54.756926 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.756935 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.756945 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.756954 | orchestrator | 2025-06-02 14:21:54.756970 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-02 14:21:54.756980 | orchestrator | Monday 02 June 2025 14:19:36 +0000 (0:00:01.582) 0:04:05.730 *********** 2025-06-02 14:21:54.756990 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.756999 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.757009 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.757019 | orchestrator | 2025-06-02 14:21:54.757028 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-02 14:21:54.757038 | orchestrator | Monday 02 June 2025 14:19:38 
+0000 (0:00:02.064) 0:04:07.795 *********** 2025-06-02 14:21:54.757048 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.757057 | orchestrator | 2025-06-02 14:21:54.757067 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-02 14:21:54.757077 | orchestrator | Monday 02 June 2025 14:19:39 +0000 (0:00:01.601) 0:04:09.397 *********** 2025-06-02 14:21:54.757087 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-02 14:21:54.757096 | orchestrator | 2025-06-02 14:21:54.757106 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-02 14:21:54.757116 | orchestrator | Monday 02 June 2025 14:19:40 +0000 (0:00:01.153) 0:04:10.550 *********** 2025-06-02 14:21:54.757126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 14:21:54.757137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 14:21:54.757152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 14:21:54.757163 | orchestrator | 2025-06-02 14:21:54.757202 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-02 14:21:54.757222 | orchestrator | Monday 02 June 2025 14:19:44 +0000 (0:00:03.668) 0:04:14.219 *********** 2025-06-02 14:21:54.757239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757256 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.757274 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.757312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757322 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.757332 | orchestrator | 2025-06-02 14:21:54.757342 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-02 14:21:54.757351 | orchestrator | Monday 02 June 2025 14:19:45 +0000 (0:00:01.345) 0:04:15.564 *********** 2025-06-02 14:21:54.757361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757381 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.757391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757411 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.757420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 14:21:54.757440 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.757450 | orchestrator | 2025-06-02 14:21:54.757460 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 14:21:54.757474 | orchestrator | 
Monday 02 June 2025 14:19:47 +0000 (0:00:01.980) 0:04:17.545 *********** 2025-06-02 14:21:54.757484 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.757493 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.757503 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.757512 | orchestrator | 2025-06-02 14:21:54.757522 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 14:21:54.757532 | orchestrator | Monday 02 June 2025 14:19:50 +0000 (0:00:02.444) 0:04:19.989 *********** 2025-06-02 14:21:54.757541 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.757551 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.757561 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.757570 | orchestrator | 2025-06-02 14:21:54.757602 | orchestrator | 2025-06-02 14:21:54 | INFO  | Task 23206dd1-8717-4842-a32a-695a5324ff3b is in state STARTED 2025-06-02 14:21:54.757614 | orchestrator | 2025-06-02 14:21:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:21:54.757624 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-02 14:21:54.757650 | orchestrator | Monday 02 June 2025 14:19:53 +0000 (0:00:03.042) 0:04:23.032 *********** 2025-06-02 14:21:54.757661 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-02 14:21:54.757670 | orchestrator | 2025-06-02 14:21:54.757680 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-02 14:21:54.757690 | orchestrator | Monday 02 June 2025 14:19:54 +0000 (0:00:00.855) 0:04:23.887 *********** 2025-06-02 14:21:54.757700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757710 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.757720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757730 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.757740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757750 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.757760 | orchestrator | 2025-06-02 14:21:54.757769 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-02 14:21:54.757779 | orchestrator | Monday 02 June 2025 14:19:55 +0000 (0:00:01.318) 0:04:25.205 *********** 2025-06-02 14:21:54.757789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757799 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.757814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757832 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.757858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 14:21:54.757869 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.757878 | orchestrator | 2025-06-02 14:21:54.757888 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-02 14:21:54.757898 | orchestrator | Monday 02 June 2025 14:19:57 +0000 (0:00:01.666) 0:04:26.872 *********** 2025-06-02 14:21:54.757908 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.757917 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.757927 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.757936 | orchestrator | 2025-06-02 14:21:54.757946 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 14:21:54.757956 | orchestrator | Monday 02 June 2025 14:19:58 +0000 (0:00:01.199) 0:04:28.072 *********** 2025-06-02 14:21:54.757965 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.757975 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.757985 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.757994 | orchestrator | 2025-06-02 14:21:54.758004 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 
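Context for the recurring "Copying over <service> ProxySQL users config" tasks: each service drops a database user definition that ProxySQL uses to route that service's MySQL traffic to the Galera backends, and the matching "rules config" tasks install the query routing rules for the same user. The exact file layout kolla-ansible writes is not visible in this log; as an illustrative sketch only, an entry shaped like ProxySQL's mysql_users table would be:

    # illustrative sketch, not the literal kolla-ansible output
    mysql_users:
      - username: nova            # service DB account (assumed name)
        password: "<from passwords.yml>"
        default_hostgroup: 0      # writer hostgroup; numbering is an assumption
        active: 1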
14:21:54.758042 | orchestrator | Monday 02 June 2025 14:20:00 +0000 (0:00:02.417) 0:04:30.490 *********** 2025-06-02 14:21:54.758054 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.758064 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.758073 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.758083 | orchestrator | 2025-06-02 14:21:54.758093 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-02 14:21:54.758103 | orchestrator | Monday 02 June 2025 14:20:03 +0000 (0:00:02.973) 0:04:33.463 *********** 2025-06-02 14:21:54.758113 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-02 14:21:54.758122 | orchestrator | 2025-06-02 14:21:54.758132 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-02 14:21:54.758142 | orchestrator | Monday 02 June 2025 14:20:04 +0000 (0:00:01.082) 0:04:34.545 *********** 2025-06-02 14:21:54.758152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758162 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.758172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758189 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.758199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758209 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.758219 | orchestrator | 2025-06-02 14:21:54.758228 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-02 14:21:54.758238 | orchestrator | Monday 02 June 2025 14:20:06 +0000 (0:00:01.033) 0:04:35.579 *********** 2025-06-02 14:21:54.758253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.758290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.758311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 14:21:54.758321 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.758331 | orchestrator | 2025-06-02 14:21:54.758341 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-02 14:21:54.758351 | orchestrator | Monday 02 June 2025 14:20:07 +0000 (0:00:01.273) 0:04:36.852 *********** 2025-06-02 14:21:54.758361 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.758370 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.758380 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.758389 | orchestrator | 2025-06-02 14:21:54.758399 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 14:21:54.758409 | orchestrator | Monday 02 June 2025 14:20:09 +0000 (0:00:01.828) 0:04:38.681 *********** 2025-06-02 14:21:54.758418 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.758428 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.758437 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.758447 | orchestrator | 2025-06-02 14:21:54.758456 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 14:21:54.758466 | orchestrator | Monday 02 June 2025 14:20:11 +0000 (0:00:02.373) 0:04:41.054 *********** 2025-06-02 14:21:54.758475 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.758491 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.758500 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.758510 | orchestrator | 2025-06-02 14:21:54.758519 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-02 14:21:54.758529 | orchestrator | Monday 02 June 2025 14:20:14 +0000 (0:00:03.117) 0:04:44.172 *********** 2025-06-02 14:21:54.758539 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.758548 | 
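Note on the three cell_proxy_loadbalancer.yml passes above: only nova-novncproxy carries enabled: True, so its haproxy config is written, while every nova-spicehtml5proxy and nova-serialproxy item is skipped (enabled: False). In a kolla-ansible-style globals.yml this selection is usually driven by toggles along these lines (variable names are an assumption, not taken from this log):

    # assumed globals.yml toggles - verify against your kolla-ansible release
    nova_console: "novnc"                  # "spice" would enable the spicehtml5proxy entries instead
    enable_nova_serialconsole_proxy: "no"  # keeps the serialproxy items skipped, as seen above

Note also the backend_http_extra: ['timeout tunnel 1h'] on the console proxies: websocket console sessions are long-lived tunnels, so their HAProxy backends get a longer tunnel timeout than ordinary API traffic (10m for the serial console).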
orchestrator | 2025-06-02 14:21:54.758558 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-02 14:21:54.758567 | orchestrator | Monday 02 June 2025 14:20:15 +0000 (0:00:01.332) 0:04:45.504 *********** 2025-06-02 14:21:54.758578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.758593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.758619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.758674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.758685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.758695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 
14:21:54.758746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.758767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.758777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.758812 | orchestrator | 2025-06-02 14:21:54.758821 | orchestrator | TASK [haproxy-config : Add configuration for octavia when 
using single external frontend] *** 2025-06-02 14:21:54.758831 | orchestrator | Monday 02 June 2025 14:20:20 +0000 (0:00:04.113) 0:04:49.618 *********** 2025-06-02 14:21:54.758857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.758869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.758884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}})  2025-06-02 14:21:54.758915 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.758945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.758957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.758967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.758993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.759003 | orchestrator 
| skipping: [testbed-node-1] 2025-06-02 14:21:54.759013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.759027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 14:21:54.759053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.759064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 14:21:54.759079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:21:54.759089 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.759099 | 
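Every "Add configuration for <service> when using single external frontend" task in this run is skipped, i.e. each API keeps its own external port on the VIP instead of being consolidated behind one shared frontend. In kolla-ansible this is gated by a toggle roughly like the following (name assumed from upstream defaults; the skips above simply show it is off in this deployment):

    # assumed globals.yml setting
    haproxy_single_external_frontend: "no"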
orchestrator | 2025-06-02 14:21:54.759109 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-02 14:21:54.759119 | orchestrator | Monday 02 June 2025 14:20:20 +0000 (0:00:00.696) 0:04:50.315 *********** 2025-06-02 14:21:54.759129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759149 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.759159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759178 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.759188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 14:21:54.759208 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.759217 | orchestrator | 2025-06-02 14:21:54.759227 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-02 14:21:54.759237 | orchestrator | Monday 02 June 2025 14:20:21 +0000 (0:00:00.905) 0:04:51.220 *********** 2025-06-02 14:21:54.759246 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.759256 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.759265 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.759275 | orchestrator | 2025-06-02 14:21:54.759284 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-02 14:21:54.759294 | orchestrator | Monday 02 June 2025 14:20:23 +0000 (0:00:01.873) 0:04:53.094 *********** 2025-06-02 14:21:54.759308 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.759318 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.759327 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.759337 | orchestrator | 2025-06-02 14:21:54.759346 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-02 14:21:54.759361 | orchestrator | Monday 02 June 2025 14:20:25 +0000 (0:00:02.163) 0:04:55.258 *********** 2025-06-02 14:21:54.759371 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.759381 | orchestrator | 2025-06-02 14:21:54.759390 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2025-06-02 14:21:54.759421 | orchestrator | Monday 02 June 2025 14:20:27 +0000 (0:00:01.337) 0:04:56.595 *********** 2025-06-02 14:21:54.759439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:21:54.759457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:21:54.759474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:21:54.759492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:21:54.759538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:21:54.759551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:21:54.759562 | orchestrator | 2025-06-02 14:21:54.759572 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-02 14:21:54.759582 | orchestrator | Monday 02 June 2025 14:20:32 +0000 (0:00:05.791) 0:05:02.387 *********** 2025-06-02 14:21:54.759592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:21:54.759606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:21:54.759623 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.759683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:21:54.759703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:21:54.759718 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.759729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:21:54.759739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:21:54.759756 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.759766 | orchestrator | 2025-06-02 14:21:54.759776 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 14:21:54.759786 | orchestrator | Monday 02 June 2025 14:20:33 +0000 (0:00:01.060) 0:05:03.447 *********** 2025-06-02 14:21:54.759800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 14:21:54.759810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759854 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.759864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 14:21:54.759874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759894 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.759904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 14:21:54.759914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 14:21:54.759934 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.759944 | orchestrator | 2025-06-02 14:21:54.759954 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-02 14:21:54.759964 | orchestrator | Monday 02 June 2025 14:20:34 +0000 (0:00:00.970) 0:05:04.417 *********** 2025-06-02 14:21:54.759973 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.759983 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.759993 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.760002 | orchestrator | 2025-06-02 14:21:54.760012 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-02 14:21:54.760022 | orchestrator | Monday 02 June 2025 14:20:35 +0000 (0:00:00.475) 0:05:04.892 *********** 2025-06-02 14:21:54.760031 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.760041 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.760050 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.760060 | orchestrator | 2025-06-02 14:21:54.760070 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-02 14:21:54.760089 | orchestrator | Monday 02 June 2025 14:20:36 +0000 (0:00:01.430) 0:05:06.323 *********** 2025-06-02 14:21:54.760098 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.760108 | orchestrator | 2025-06-02 14:21:54.760118 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-02 14:21:54.760127 | orchestrator | Monday 02 June 2025 14:20:38 +0000 (0:00:01.712) 0:05:08.036 *********** 2025-06-02 14:21:54.760137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}}) 2025-06-02 14:21:54.760152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:21:54.760178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:21:54.760226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-06-02 14:21:54.760237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:21:54.760277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:21:54.760299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 
14:21:54.760325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:21:54.760367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:21:54.760429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-06-02 14:21:54.760457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:21:54.760473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760524 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760544 | orchestrator | 2025-06-02 14:21:54.760554 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-02 14:21:54.760564 | orchestrator | Monday 02 June 2025 14:20:42 +0000 (0:00:04.133) 0:05:12.169 *********** 2025-06-02 14:21:54.760574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 14:21:54.760590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:21:54.760600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 14:21:54.760706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760753 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.760768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 14:21:54.760784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:21:54.760795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 14:21:54.760842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760904 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.760912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 14:21:54.760921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:21:54.760929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.760963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 14:21:54.760977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 14:21:54.760985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.760993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:21:54.761001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:21:54.761009 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761017 | orchestrator | 2025-06-02 14:21:54.761025 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-02 
14:21:54.761033 | orchestrator | Monday 02 June 2025 14:20:43 +0000 (0:00:01.240) 0:05:13.410 *********** 2025-06-02 14:21:54.761042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761090 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761140 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 14:21:54.761159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 14:21:54.761176 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761184 | orchestrator | 2025-06-02 14:21:54.761192 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-02 14:21:54.761200 | orchestrator | Monday 02 June 2025 14:20:44 +0000 (0:00:01.078) 0:05:14.488 *********** 2025-06-02 14:21:54.761208 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761215 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761223 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761231 | orchestrator | 2025-06-02 14:21:54.761239 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-02 14:21:54.761247 | orchestrator | Monday 02 June 2025 14:20:45 +0000 (0:00:00.453) 0:05:14.942 *********** 2025-06-02 14:21:54.761254 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761262 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761270 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761278 | orchestrator | 2025-06-02 14:21:54.761286 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-02 14:21:54.761294 | orchestrator | Monday 02 June 2025 14:20:47 +0000 (0:00:01.726) 0:05:16.668 *********** 2025-06-02 14:21:54.761302 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.761309 | orchestrator | 2025-06-02 14:21:54.761317 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-02 14:21:54.761330 | orchestrator | Monday 02 June 2025 14:20:48 +0000 (0:00:01.750) 0:05:18.418 *********** 2025-06-02 14:21:54.761346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 14:21:54.761356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 14:21:54.761366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 14:21:54.761375 | orchestrator | 2025-06-02 14:21:54.761383 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-02 14:21:54.761391 | orchestrator | Monday 02 June 2025 14:20:51 +0000 (0:00:02.660) 0:05:21.079 *********** 2025-06-02 14:21:54.761399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 14:21:54.761421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 14:21:54.761431 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761439 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 14:21:54.761456 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761464 | orchestrator | 2025-06-02 14:21:54.761472 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-02 14:21:54.761480 | orchestrator | Monday 02 June 2025 14:20:51 +0000 (0:00:00.410) 0:05:21.489 *********** 2025-06-02 14:21:54.761488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 14:21:54.761496 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 14:21:54.761512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 14:21:54.761528 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761536 | orchestrator | 2025-06-02 14:21:54.761544 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-02 14:21:54.761552 | orchestrator | Monday 02 June 2025 14:20:52 +0000 (0:00:01.085) 0:05:22.575 *********** 2025-06-02 14:21:54.761560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761568 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761576 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761584 | orchestrator | 2025-06-02 14:21:54.761592 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-02 14:21:54.761605 | orchestrator | Monday 02 June 2025 14:20:53 +0000 (0:00:00.474) 0:05:23.049 *********** 2025-06-02 14:21:54.761613 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761621 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761649 | orchestrator | skipping: [testbed-node-2] 
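The haproxy-config tasks logged above loop over each service definition dict and act only on items whose nested 'haproxy' sub-dict enables a matching frontend, which is why the same item can be 'changed' in one task and 'skipping' in the next. A minimal Python sketch of that selection logic, using the rabbitmq item from the log; the helper name select_listeners() is hypothetical and not the actual kolla-ansible role code:

```python
# Sketch of the per-item filter implied by the log: one listener is
# emitted per enabled 'haproxy' entry, split into internal and external
# rendering passes. select_listeners() is a hypothetical helper.
def select_listeners(services, external):
    """Yield (name, spec) pairs that should get a haproxy frontend."""
    for svc in services.values():
        if not svc.get("enabled"):
            continue
        for name, spec in svc.get("haproxy", {}).items():
            # 'enabled' appears as the string 'yes' on the rabbitmq and
            # skyline items above, so normalize before testing.
            if str(spec.get("enabled")).lower() not in ("yes", "true"):
                continue
            # External frontends are only rendered on the external pass.
            if bool(spec.get("external", False)) != external:
                continue
            yield name, spec

rabbitmq = {
    "rabbitmq": {
        "enabled": True,
        "haproxy": {
            "rabbitmq_management": {
                "enabled": "yes", "mode": "http",
                "port": "15672", "host_group": "rabbitmq",
            }
        },
    }
}

# The internal pass picks up rabbitmq_management; the external pass
# yields nothing, since the entry carries no 'external': True flag.
print(dict(select_listeners(rabbitmq, external=False)))
print(dict(select_listeners(rabbitmq, external=True)))
```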
2025-06-02 14:21:54.761658 | orchestrator | 2025-06-02 14:21:54.761666 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-02 14:21:54.761673 | orchestrator | Monday 02 June 2025 14:20:54 +0000 (0:00:01.379) 0:05:24.429 *********** 2025-06-02 14:21:54.761681 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:21:54.761689 | orchestrator | 2025-06-02 14:21:54.761697 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-02 14:21:54.761704 | orchestrator | Monday 02 June 2025 14:20:56 +0000 (0:00:01.787) 0:05:26.217 *********** 2025-06-02 14:21:54.761716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 14:21:54.761788 | orchestrator | 2025-06-02 14:21:54.761796 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 14:21:54.761804 | orchestrator | Monday 02 June 2025 14:21:02 +0000 (0:00:06.255) 0:05:32.472 *********** 2025-06-02 14:21:54.761813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761834 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761881 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.761889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 14:21:54.761911 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.761919 | orchestrator | 2025-06-02 14:21:54.761927 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-02 14:21:54.761935 | orchestrator | Monday 02 June 2025 14:21:03 +0000 (0:00:00.689) 0:05:33.161 *********** 2025-06-02 14:21:54.761943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.761951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.761959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.761967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.761975 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.761983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.761995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762080 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 14:21:54.762120 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762134 | orchestrator | 2025-06-02 14:21:54.762142 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 14:21:54.762150 | orchestrator | Monday 02 June 2025 14:21:05 +0000 (0:00:01.917) 0:05:35.079 *********** 2025-06-02 14:21:54.762158 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.762166 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.762174 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.762182 | orchestrator | 2025-06-02 14:21:54.762189 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-02 14:21:54.762197 | orchestrator | Monday 02 June 2025 14:21:06 +0000 (0:00:01.307) 0:05:36.387 *********** 2025-06-02 14:21:54.762205 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.762213 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.762221 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.762229 | orchestrator | 2025-06-02 14:21:54.762237 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-02 14:21:54.762245 | orchestrator | Monday 02 June 2025 14:21:08 +0000 (0:00:02.178) 0:05:38.565 *********** 2025-06-02 14:21:54.762252 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762260 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762268 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 14:21:54.762276 | orchestrator | 2025-06-02 14:21:54.762283 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-02 14:21:54.762291 | orchestrator | Monday 02 June 2025 14:21:09 +0000 (0:00:00.311) 0:05:38.877 *********** 2025-06-02 14:21:54.762299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762307 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762315 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762323 | orchestrator | 2025-06-02 14:21:54.762331 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-02 14:21:54.762338 | orchestrator | Monday 02 June 2025 14:21:09 +0000 (0:00:00.627) 0:05:39.504 *********** 2025-06-02 14:21:54.762346 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762354 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762362 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762369 | orchestrator | 2025-06-02 14:21:54.762377 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-02 14:21:54.762385 | orchestrator | Monday 02 June 2025 14:21:10 +0000 (0:00:00.337) 0:05:39.841 *********** 2025-06-02 14:21:54.762393 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762401 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762408 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762416 | orchestrator | 2025-06-02 14:21:54.762424 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 14:21:54.762432 | orchestrator | Monday 02 June 2025 14:21:10 +0000 (0:00:00.358) 0:05:40.200 *********** 2025-06-02 14:21:54.762440 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762455 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762463 | orchestrator | 2025-06-02 14:21:54.762471 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 14:21:54.762479 | orchestrator | Monday 02 June 2025 14:21:10 +0000 (0:00:00.312) 0:05:40.513 *********** 2025-06-02 14:21:54.762487 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.762495 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.762502 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.762510 | orchestrator | 2025-06-02 14:21:54.762518 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 14:21:54.762526 | orchestrator | Monday 02 June 2025 14:21:11 +0000 (0:00:00.889) 0:05:41.402 *********** 2025-06-02 14:21:54.762534 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762542 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762550 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762558 | orchestrator | 2025-06-02 14:21:54.762574 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-02 14:21:54.762582 | orchestrator | Monday 02 June 2025 14:21:12 +0000 (0:00:00.696) 0:05:42.098 *********** 2025-06-02 14:21:54.762590 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762598 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762606 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762613 | orchestrator | 2025-06-02 14:21:54.762621 
| orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 14:21:54.762683 | orchestrator | Monday 02 June 2025 14:21:12 +0000 (0:00:00.370) 0:05:42.469 *********** 2025-06-02 14:21:54.762698 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762712 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762721 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762729 | orchestrator | 2025-06-02 14:21:54.762742 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 14:21:54.762750 | orchestrator | Monday 02 June 2025 14:21:14 +0000 (0:00:01.338) 0:05:43.808 *********** 2025-06-02 14:21:54.762758 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762766 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762774 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762782 | orchestrator | 2025-06-02 14:21:54.762789 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 14:21:54.762797 | orchestrator | Monday 02 June 2025 14:21:15 +0000 (0:00:00.880) 0:05:44.688 *********** 2025-06-02 14:21:54.762805 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762813 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762820 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762828 | orchestrator | 2025-06-02 14:21:54.762836 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-02 14:21:54.762844 | orchestrator | Monday 02 June 2025 14:21:16 +0000 (0:00:00.941) 0:05:45.630 *********** 2025-06-02 14:21:54.762852 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.762859 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.762865 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.762872 | orchestrator | 2025-06-02 14:21:54.762879 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 14:21:54.762885 | orchestrator | Monday 02 June 2025 14:21:20 +0000 (0:00:04.661) 0:05:50.292 *********** 2025-06-02 14:21:54.762892 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762898 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762905 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762911 | orchestrator | 2025-06-02 14:21:54.762918 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 14:21:54.762925 | orchestrator | Monday 02 June 2025 14:21:24 +0000 (0:00:03.690) 0:05:53.983 *********** 2025-06-02 14:21:54.762931 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.762938 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.762945 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.762951 | orchestrator | 2025-06-02 14:21:54.762958 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 14:21:54.762964 | orchestrator | Monday 02 June 2025 14:21:38 +0000 (0:00:14.226) 0:06:08.209 *********** 2025-06-02 14:21:54.762971 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.762977 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.762984 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.762990 | orchestrator | 2025-06-02 14:21:54.762997 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-02 14:21:54.763004 | orchestrator | Monday 02 June 
2025 14:21:39 +0000 (0:00:00.746) 0:06:08.955 *********** 2025-06-02 14:21:54.763010 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:21:54.763017 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:21:54.763023 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:21:54.763030 | orchestrator | 2025-06-02 14:21:54.763037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 14:21:54.763043 | orchestrator | Monday 02 June 2025 14:21:43 +0000 (0:00:04.326) 0:06:13.282 *********** 2025-06-02 14:21:54.763055 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763062 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763068 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763075 | orchestrator | 2025-06-02 14:21:54.763081 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 14:21:54.763088 | orchestrator | Monday 02 June 2025 14:21:44 +0000 (0:00:00.348) 0:06:13.630 *********** 2025-06-02 14:21:54.763095 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763101 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763108 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763114 | orchestrator | 2025-06-02 14:21:54.763121 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 14:21:54.763128 | orchestrator | Monday 02 June 2025 14:21:44 +0000 (0:00:00.791) 0:06:14.421 *********** 2025-06-02 14:21:54.763134 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763141 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763147 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763154 | orchestrator | 2025-06-02 14:21:54.763161 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-02 14:21:54.763168 | orchestrator | Monday 02 June 2025 14:21:45 +0000 (0:00:00.373) 0:06:14.795 *********** 2025-06-02 14:21:54.763174 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763181 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763187 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763194 | orchestrator | 2025-06-02 14:21:54.763200 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 14:21:54.763207 | orchestrator | Monday 02 June 2025 14:21:45 +0000 (0:00:00.384) 0:06:15.179 *********** 2025-06-02 14:21:54.763214 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763220 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763227 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763233 | orchestrator | 2025-06-02 14:21:54.763240 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 14:21:54.763247 | orchestrator | Monday 02 June 2025 14:21:45 +0000 (0:00:00.383) 0:06:15.563 *********** 2025-06-02 14:21:54.763253 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:21:54.763260 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:21:54.763266 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:21:54.763273 | orchestrator | 2025-06-02 14:21:54.763280 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 14:21:54.763290 | orchestrator | Monday 02 June 2025 14:21:46 +0000 (0:00:00.754) 0:06:16.318 *********** 
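The handler sequence above restarts keepalived, haproxy, and proxysql only on the backup nodes (every master-side stop/start handler is skipped), and the next handler blocks until haproxy actually answers on the VIP before the play proceeds. A rough standalone equivalent of that wait step is sketched below; the play itself does this through Ansible, and the VIP address, port, and timeout here are purely illustrative:

```python
# Sketch of a "wait until haproxy listens on the VIP" check, mirroring
# the handler above. Host, port, and timeout are illustrative values,
# not taken from this deployment's configuration.
import socket
import time

def wait_for_listener(host: str, port: int, timeout: float = 300.0) -> None:
    """Poll until a TCP connect to host:port succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return  # the VIP answers; haproxy is up
        except OSError:
            time.sleep(1)  # retry cadence comparable to the log's checks
    raise TimeoutError(f"{host}:{port} did not start listening in time")

# Backup nodes are verified before the master is touched, so the VIP
# never goes unanswered during the rolling restart.
wait_for_listener("192.168.16.9", 443, timeout=30.0)
```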
2025-06-02 14:21:54.763297 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.763304 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.763310 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.763317 | orchestrator | 2025-06-02 14:21:54.763323 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-02 14:21:54.763330 | orchestrator | Monday 02 June 2025 14:21:51 +0000 (0:00:04.799) 0:06:21.118 *********** 2025-06-02 14:21:54.763337 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:21:54.763343 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:21:54.763350 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:21:54.763356 | orchestrator | 2025-06-02 14:21:54.763366 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:21:54.763373 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 14:21:54.763381 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 14:21:54.763387 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 14:21:54.763398 | orchestrator | 2025-06-02 14:21:54.763405 | orchestrator | 2025-06-02 14:21:54.763412 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:21:54.763418 | orchestrator | Monday 02 June 2025 14:21:52 +0000 (0:00:00.789) 0:06:21.907 *********** 2025-06-02 14:21:54.763425 | orchestrator | =============================================================================== 2025-06-02 14:21:54.763432 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.23s 2025-06-02 14:21:54.763438 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.26s 2025-06-02 14:21:54.763445 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.79s 2025-06-02 14:21:54.763452 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.32s 2025-06-02 14:21:54.763458 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.31s 2025-06-02 14:21:54.763465 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.24s 2025-06-02 14:21:54.763471 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.14s 2025-06-02 14:21:54.763478 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.80s 2025-06-02 14:21:54.763485 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.67s 2025-06-02 14:21:54.763491 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.66s 2025-06-02 14:21:54.763498 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.33s 2025-06-02 14:21:54.763505 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.32s 2025-06-02 14:21:54.763511 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.31s 2025-06-02 14:21:54.763518 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.19s 2025-06-02 14:21:54.763524 | orchestrator | haproxy-config : Copying over prometheus haproxy config 
----------------- 4.13s 2025-06-02 14:21:54.763531 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.11s 2025-06-02 14:21:54.763538 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.07s 2025-06-02 14:21:54.763544 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.99s 2025-06-02 14:21:54.763551 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.89s 2025-06-02 14:21:54.763558 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.83s 2025-06-02 14:21:57.791300 | orchestrator | 2025-06-02 14:21:57 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:21:57.792487 | orchestrator | 2025-06-02 14:21:57 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED 2025-06-02 14:21:57.794524 | orchestrator | 2025-06-02 14:21:57 | INFO  | Task 23206dd1-8717-4842-a32a-695a5324ff3b is in state STARTED 2025-06-02 14:21:57.794558 | orchestrator | 2025-06-02 14:21:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:23:35.446165 | orchestrator | 2025-06-02 14:23:35 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state STARTED 2025-06-02 14:23:35.447982 | orchestrator | 2025-06-02 14:23:35 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED 2025-06-02 14:23:35.449481 | orchestrator | 2025-06-02 14:23:35 | INFO  | Task 23206dd1-8717-4842-a32a-695a5324ff3b is in state STARTED 2025-06-02 14:23:35.449507 | orchestrator | 2025-06-02 14:23:35 | 
INFO  | Wait 1 second(s) until the next check 2025-06-02 14:23:38.512744 | orchestrator | 2025-06-02 14:23:38 | INFO  | Task eddbd8f1-646c-4866-ba66-a74ee3dd19d0 is in state SUCCESS 2025-06-02 14:23:38.514391 | orchestrator | 2025-06-02 14:23:38.514436 | orchestrator | 2025-06-02 14:23:38.514449 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-02 14:23:38.514462 | orchestrator | 2025-06-02 14:23:38.514473 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 14:23:38.514484 | orchestrator | Monday 02 June 2025 14:12:49 +0000 (0:00:00.620) 0:00:00.620 *********** 2025-06-02 14:23:38.514605 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.514705 | orchestrator | 2025-06-02 14:23:38.514723 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 14:23:38.514735 | orchestrator | Monday 02 June 2025 14:12:50 +0000 (0:00:01.186) 0:00:01.807 *********** 2025-06-02 14:23:38.514747 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.514759 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.514770 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.514781 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.514792 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.514803 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.514814 | orchestrator | 2025-06-02 14:23:38.514825 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 14:23:38.514836 | orchestrator | Monday 02 June 2025 14:12:52 +0000 (0:00:01.500) 0:00:03.307 *********** 2025-06-02 14:23:38.514847 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.514858 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.514869 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.514879 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.514892 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.514910 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.514930 | orchestrator | 2025-06-02 14:23:38.514950 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 14:23:38.514962 | orchestrator | Monday 02 June 2025 14:12:53 +0000 (0:00:00.932) 0:00:04.239 *********** 2025-06-02 14:23:38.514975 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.514988 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.515000 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.515149 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.515163 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.515175 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.515189 | orchestrator | 2025-06-02 14:23:38.515202 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 14:23:38.515214 | orchestrator | Monday 02 June 2025 14:12:54 +0000 (0:00:00.934) 0:00:05.173 *********** 2025-06-02 14:23:38.515227 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.515239 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.515252 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.515264 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.515276 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.515342 | 
2025-06-02 14:23:38.514391 | orchestrator |
2025-06-02 14:23:38.514436 | orchestrator |
2025-06-02 14:23:38.514449 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 14:23:38.514462 | orchestrator |
2025-06-02 14:23:38.514473 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 14:23:38.514484 | orchestrator | Monday 02 June 2025 14:12:49 +0000 (0:00:00.620) 0:00:00.620 ***********
2025-06-02 14:23:38.514605 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.514705 | orchestrator |
2025-06-02 14:23:38.514723 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 14:23:38.514735 | orchestrator | Monday 02 June 2025 14:12:50 +0000 (0:00:01.186) 0:00:01.807 ***********
2025-06-02 14:23:38.514747 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.514759 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.514770 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.514781 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.514792 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.514803 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.514814 | orchestrator |
2025-06-02 14:23:38.514825 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 14:23:38.514836 | orchestrator | Monday 02 June 2025 14:12:52 +0000 (0:00:01.500) 0:00:03.307 ***********
2025-06-02 14:23:38.514847 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.514858 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.514869 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.514879 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.514892 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.514910 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.514930 | orchestrator |
2025-06-02 14:23:38.514950 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 14:23:38.514962 | orchestrator | Monday 02 June 2025 14:12:53 +0000 (0:00:00.932) 0:00:04.239 ***********
2025-06-02 14:23:38.514975 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.514988 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.515000 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.515149 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.515163 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.515175 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.515189 | orchestrator |
2025-06-02 14:23:38.515202 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 14:23:38.515214 | orchestrator | Monday 02 June 2025 14:12:54 +0000 (0:00:00.934) 0:00:05.173 ***********
2025-06-02 14:23:38.515227 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.515239 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.515252 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.515264 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.515276 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.515342 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.515357 | orchestrator |
2025-06-02 14:23:38.515370 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 14:23:38.515381 | orchestrator | Monday 02 June 2025 14:12:54 +0000 (0:00:00.661) 0:00:05.835 ***********
2025-06-02 14:23:38.515391 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.515402 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.515413 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.515489 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.515500 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.515510 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.515521 | orchestrator |
2025-06-02 14:23:38.515558 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 14:23:38.515570 | orchestrator | Monday 02 June 2025 14:12:55 +0000 (0:00:00.657) 0:00:06.492 ***********
2025-06-02 14:23:38.515581 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.515592 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.515604 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.515614 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.515625 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.515714 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.515731 | orchestrator |
2025-06-02 14:23:38.515742 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 14:23:38.515778 | orchestrator | Monday 02 June 2025 14:12:56 +0000 (0:00:00.917) 0:00:07.410 ***********
2025-06-02 14:23:38.515790 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.515802 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.515813 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.515957 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.515969 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.515980 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.515991 | orchestrator |
2025-06-02 14:23:38.516002 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 14:23:38.516012 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:00.783) 0:00:08.193 ***********
2025-06-02 14:23:38.516023 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.516034 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.516115 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.516127 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.516138 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.516148 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.516159 | orchestrator |
2025-06-02 14:23:38.516170 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 14:23:38.516181 | orchestrator | Monday 02 June 2025 14:12:57 +0000 (0:00:00.868) 0:00:09.062 ***********
2025-06-02 14:23:38.516192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 14:23:38.516204 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 14:23:38.516215 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 14:23:38.516226 | orchestrator |
2025-06-02 14:23:38.516237 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd]
******************************** 2025-06-02 14:23:38.516248 | orchestrator | Monday 02 June 2025 14:12:58 +0000 (0:00:00.772) 0:00:09.835 *********** 2025-06-02 14:23:38.516259 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.516270 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.516281 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.516292 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.516302 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.516343 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.516355 | orchestrator | 2025-06-02 14:23:38.516382 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 14:23:38.516394 | orchestrator | Monday 02 June 2025 14:13:00 +0000 (0:00:01.288) 0:00:11.124 *********** 2025-06-02 14:23:38.516405 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 14:23:38.516606 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:23:38.516619 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:23:38.516630 | orchestrator | 2025-06-02 14:23:38.516713 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 14:23:38.516737 | orchestrator | Monday 02 June 2025 14:13:02 +0000 (0:00:02.642) 0:00:13.767 *********** 2025-06-02 14:23:38.516754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 14:23:38.516806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 14:23:38.516817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 14:23:38.516828 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.516839 | orchestrator | 2025-06-02 14:23:38.516850 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 14:23:38.516861 | orchestrator | Monday 02 June 2025 14:13:03 +0000 (0:00:01.141) 0:00:14.908 *********** 2025-06-02 14:23:38.516874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.516889 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.516900 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.516912 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517072 | orchestrator | 2025-06-02 14:23:38.517086 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 14:23:38.517097 | orchestrator | Monday 02 June 2025 14:13:04 +0000 (0:00:01.137) 0:00:16.046 *********** 2025-06-02 14:23:38.517110 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517132 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517144 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517155 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517166 | orchestrator | 2025-06-02 14:23:38.517177 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 14:23:38.517188 | orchestrator | Monday 02 June 2025 14:13:05 +0000 (0:00:00.375) 0:00:16.421 *********** 2025-06-02 14:23:38.517201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 14:13:00.692690', 'end': '2025-06-02 14:13:00.954927', 'delta': '0:00:00.262237', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517237 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 14:13:01.584862', 'end': '2025-06-02 14:13:01.842312', 'delta': '0:00:00.257450', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517250 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 14:13:02.267520', 'end': '2025-06-02 14:13:02.539077', 'delta': '0:00:00.271557', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': 
None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.517262 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517273 | orchestrator | 2025-06-02 14:23:38.517284 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 14:23:38.517295 | orchestrator | Monday 02 June 2025 14:13:05 +0000 (0:00:00.167) 0:00:16.588 *********** 2025-06-02 14:23:38.517306 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.517317 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.517327 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.517338 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.517349 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.517360 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.517370 | orchestrator | 2025-06-02 14:23:38.517381 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 14:23:38.517433 | orchestrator | Monday 02 June 2025 14:13:07 +0000 (0:00:01.690) 0:00:18.279 *********** 2025-06-02 14:23:38.517444 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.517455 | orchestrator | 2025-06-02 14:23:38.517466 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 14:23:38.517503 | orchestrator | Monday 02 June 2025 14:13:07 +0000 (0:00:00.726) 0:00:19.005 *********** 2025-06-02 14:23:38.517514 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517526 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.517536 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.517547 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.517558 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.517569 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.517580 | orchestrator | 2025-06-02 14:23:38.517591 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 14:23:38.517607 | orchestrator | Monday 02 June 2025 14:13:09 +0000 (0:00:01.209) 0:00:20.215 *********** 2025-06-02 14:23:38.517618 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517636 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.517713 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.517725 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.517736 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.517746 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.517757 | orchestrator | 2025-06-02 14:23:38.517768 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 14:23:38.517779 | orchestrator | Monday 02 June 2025 14:13:10 +0000 (0:00:01.779) 0:00:21.995 *********** 2025-06-02 14:23:38.517790 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.517811 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.517822 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.517833 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.517843 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.517854 | orchestrator | 2025-06-02 14:23:38.517865 | orchestrator | TASK 
[ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 14:23:38.517876 | orchestrator | Monday 02 June 2025 14:13:12 +0000 (0:00:01.507) 0:00:23.502 *********** 2025-06-02 14:23:38.517887 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517897 | orchestrator | 2025-06-02 14:23:38.517908 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 14:23:38.517919 | orchestrator | Monday 02 June 2025 14:13:12 +0000 (0:00:00.182) 0:00:23.684 *********** 2025-06-02 14:23:38.517930 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.517941 | orchestrator | 2025-06-02 14:23:38.517952 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 14:23:38.517971 | orchestrator | Monday 02 June 2025 14:13:12 +0000 (0:00:00.212) 0:00:23.896 *********** 2025-06-02 14:23:38.517990 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518009 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518075 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518087 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518097 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518108 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518119 | orchestrator | 2025-06-02 14:23:38.518130 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 14:23:38.518151 | orchestrator | Monday 02 June 2025 14:13:13 +0000 (0:00:00.668) 0:00:24.564 *********** 2025-06-02 14:23:38.518163 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518174 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518185 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518196 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518207 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518218 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518229 | orchestrator | 2025-06-02 14:23:38.518238 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 14:23:38.518248 | orchestrator | Monday 02 June 2025 14:13:14 +0000 (0:00:00.916) 0:00:25.481 *********** 2025-06-02 14:23:38.518258 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518268 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518277 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518287 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518296 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518306 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518315 | orchestrator | 2025-06-02 14:23:38.518325 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 14:23:38.518335 | orchestrator | Monday 02 June 2025 14:13:15 +0000 (0:00:00.865) 0:00:26.346 *********** 2025-06-02 14:23:38.518345 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518355 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518365 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518374 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518392 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518402 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518411 | orchestrator | 2025-06-02 14:23:38.518421 | orchestrator 
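The fsid tasks above all ended up skipped because the cluster already reported its fsid ("Get current fsid if cluster is already running" succeeded on testbed-node-0), so neither the fallback lookup nor "Generate cluster fsid" had to run. The device-link resolution steps that follow were likewise skipped, since no symlinked device paths are configured in this testbed. For orientation, a rough Python paraphrase of what "Resolve device link(s)" does when it runs (ceph-ansible resolves these with a readlink/realpath-style command task; the helper and sample path here are illustrative):

    import os

    def resolve_device_links(devices):
        # Follow /dev/disk/by-* symlink chains to canonical device nodes,
        # so later tasks can compare devices without worrying about aliases.
        return [os.path.realpath(device) for device in devices]

    # resolve_device_links(["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_..."])
    # would yield something like ["/dev/sdb"] on these nodes.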
| TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 14:23:38.518431 | orchestrator | Monday 02 June 2025 14:13:16 +0000 (0:00:00.992) 0:00:27.339 *********** 2025-06-02 14:23:38.518441 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518505 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518516 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518526 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518536 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518545 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518555 | orchestrator | 2025-06-02 14:23:38.518564 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 14:23:38.518574 | orchestrator | Monday 02 June 2025 14:13:16 +0000 (0:00:00.597) 0:00:27.937 *********** 2025-06-02 14:23:38.518584 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518594 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518603 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518613 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518623 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518632 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518662 | orchestrator | 2025-06-02 14:23:38.518674 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 14:23:38.518683 | orchestrator | Monday 02 June 2025 14:13:17 +0000 (0:00:00.833) 0:00:28.771 *********** 2025-06-02 14:23:38.518693 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.518702 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.518712 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.518721 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.518730 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.518740 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.518749 | orchestrator | 2025-06-02 14:23:38.518759 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 14:23:38.518768 | orchestrator | Monday 02 June 2025 14:13:18 +0000 (0:00:00.680) 0:00:29.451 *********** 2025-06-02 14:23:38.518785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part1', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part14', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part15', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part16', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.518936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.518966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.518992 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.519087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.519098 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.519112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
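The long skip lists in "Collect existed devices" come from the device auto-discovery step: apparently osd_auto_discovery is disabled here, so every entry of each host's Ansible device facts is skipped, but the dumps still show the shape of the data (loop devices, the sda root disk with its partitions, LVM-backed dm-* OSD volumes, and the sr0 config drive). When the filter does run, it keeps only unclaimed whole disks. A Python paraphrase of that selection (a simplification, not ceph-ansible's exact condition):

    def collect_candidate_disks(device_facts):
        # device_facts has the same shape as the dumps above, e.g.
        # {"sda": {"holders": [...], "partitions": {...}, "removable": "0", ...}, ...}
        candidates = []
        for name, info in device_facts.items():
            if name.startswith(("loop", "dm-", "sr")):
                continue  # virtual, device-mapper and optical devices
            if info.get("removable") == "1":
                continue  # removable media such as the config drive
            if info.get("holders") or info.get("partitions"):
                continue  # already claimed by LVM/Ceph or partitioned (e.g. the root disk)
            candidates.append("/dev/" + name)
        return candidates

On testbed-node-3, for example, this would leave only /dev/sdd: sda carries the root filesystem, and sdb/sdc are already claimed by the ceph-* LVM volumes listed in their holders.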
2025-06-02 14:23:38.519132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.519209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part1', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part14', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part15', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part16', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.519920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520057 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.520077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2', 'dm-uuid-LVM-PRcTXFVMD2J9y2msp1jLbP8Tnzjv1PZVW7vY9gu7hRhzOlXXC6Y4BJjIOwreghe7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7', 
'dm-uuid-LVM-0DHQdMENg10onuP1gilf8HJ18ewp3PYPu7xdXLMFVyJjPsrnSMt5DptLsvyQSKuq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520201 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.520213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10', 'dm-uuid-LVM-aXsMYsQIG8ipRI6F2Ecf6r6twXfyZeU7xIZbpf6RWajeJPlgDWFTHlsGQKjWz1LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d', 'dm-uuid-LVM-kHfeidgHrXTbvPvXcWUbj91hl0Z4ABGq6i0Mp9siSSBfn9jcs9Wo6Ju11kKwZRP6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HeXiKj-Y2ur-EJzQ-DSWO-DbOw-90BR-diQB6B', 'scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf', 'scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klRX02-oPol-DcMk-qROk-Spg4-9fo7-Bn1a3b', 'scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e', 'scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8', 'scsi-SQEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520505 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.520518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d', 'dm-uuid-LVM-ArZCk8LA2tgmTNdcy1sxqx9AkNK4pZELH7EpPioFIlc0i0NnKMTWiIR6eimZUHba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Dhy17-rLof-5atV-hb51-G5xb-ipkX-5N8jtU', 'scsi-0QEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7', 'scsi-SQEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064', 'dm-uuid-LVM-LYlgOOuwskw0FRxuwd5epNvmykOdYzYqPGwfzPfdt4v7TSbe2xrqDaw8ZlBsHExx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lujp7B-oHJI-oyfJ-cKSB-z2fw-TJNM-IYwucw', 'scsi-0QEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b', 'scsi-SQEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857', 'scsi-SQEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520758 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.520769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:23:38.520841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bCTR9v-UPvn-niQy-9m0V-qAJW-1Wfw-HfxNC2', 'scsi-0QEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb', 'scsi-SQEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPpjBi-Y6G1-qLzp-1TWE-7LY8-B2hS-h657E1', 'scsi-0QEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000', 'scsi-SQEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0', 'scsi-SQEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:23:38.520922 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.520934 | orchestrator | 2025-06-02 14:23:38.520946 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 14:23:38.520958 | orchestrator | Monday 02 June 2025 14:13:20 +0000 (0:00:02.279) 0:00:31.730 *********** 2025-06-02 14:23:38.520971 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.520984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.520995 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521025 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521037 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521049 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521099 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part1', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part14', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part15', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part16', 'scsi-SQEMU_QEMU_HARDDISK_35465401-401c-49c9-ae8f-f7b96b89b216-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521138 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.521149 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521161 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521172 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521199 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521222 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521243 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521255 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521272 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5181ae0-889a-48f6-853e-904cf79da0d2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521291 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521309 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521321 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521339 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.521351 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521362 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521379 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521390 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521407 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521419 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521437 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part1', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part14', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part15', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part16', 'scsi-SQEMU_QEMU_HARDDISK_5d6e9507-eb6a-4b2c-98bf-1ecae1dcdbe5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521456 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 
'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2', 'dm-uuid-LVM-PRcTXFVMD2J9y2msp1jLbP8Tnzjv1PZVW7vY9gu7hRhzOlXXC6Y4BJjIOwreghe7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7', 'dm-uuid-LVM-0DHQdMENg10onuP1gilf8HJ18ewp3PYPu7xdXLMFVyJjPsrnSMt5DptLsvyQSKuq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521505 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.521517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-06-02 14:23:38.521545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521605 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10', 'dm-uuid-LVM-aXsMYsQIG8ipRI6F2Ecf6r6twXfyZeU7xIZbpf6RWajeJPlgDWFTHlsGQKjWz1LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d', 'dm-uuid-LVM-kHfeidgHrXTbvPvXcWUbj91hl0Z4ABGq6i0Mp9siSSBfn9jcs9Wo6Ju11kKwZRP6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521720 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HeXiKj-Y2ur-EJzQ-DSWO-DbOw-90BR-diQB6B', 'scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf', 'scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klRX02-oPol-DcMk-qROk-Spg4-9fo7-Bn1a3b', 'scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e', 'scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.521986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.522004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:23:38.522066 | orchestrator | skipping: 
[testbed-node-3] => (item=sdd)
[Note: the per-item loop output below is condensed. Each skipped item originally echoed the complete ansible_facts.devices entry for the device (vendor, model, partitions, UUIDs, sizes); only the host, the device key and the skip reason are kept. Every item was skipped because the condition 'osd_auto_discovery | default(False) | bool' evaluated to false.]
2025-06-02 14:23:38.522090 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2025-06-02 14:23:38.522113 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2025-06-02 14:23:38.522124 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2025-06-02 14:23:38.522138 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2025-06-02 14:23:38.522158 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2025-06-02 14:23:38.522178 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.522190 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2025-06-02 14:23:38.522243 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2025-06-02 14:23:38.522262 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2025-06-02 14:23:38.522273 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2025-06-02 14:23:38.522292 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2025-06-02 14:23:38.522312 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.522323 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2025-06-02 14:23:38.522335 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2025-06-02 14:23:38.522352 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2025-06-02 14:23:38.522364 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2025-06-02 14:23:38.522375 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2025-06-02 14:23:38.522399 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2025-06-02 14:23:38.522411 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2025-06-02 14:23:38.522422 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2025-06-02 14:23:38.522434 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2025-06-02 14:23:38.522458 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2025-06-02 14:23:38.522479 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2025-06-02 14:23:38.522491 | orchestrator | skipping: [testbed-node-5] => (item=sdc)
2025-06-02 14:23:38.522508 | orchestrator | skipping: [testbed-node-5] => (item=sdd)
2025-06-02 14:23:38.522520 | orchestrator | skipping: [testbed-node-5] => (item=sr0)
2025-06-02 14:23:38.522542 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.522555 | orchestrator |
2025-06-02 14:23:38.522569 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 14:23:38.522582 | orchestrator | Monday 02 June 2025 14:13:23 +0000 (0:00:02.681) 0:00:34.412 ***********
2025-06-02 14:23:38.522595 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.522608 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.522621 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.522657 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.522671 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.522684 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.522696 | orchestrator |
2025-06-02 14:23:38.522708 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 14:23:38.522721 | orchestrator | Monday 02 June 2025 14:13:24 +0000 (0:00:01.356) 0:00:35.769 ***********
2025-06-02 14:23:38.522733 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.522746 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.522758 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.522770 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.522783 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.522795 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.522807 | orchestrator |
2025-06-02 14:23:38.522819 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 14:23:38.522831 | orchestrator | Monday 02 June 2025 14:13:25 +0000 (0:00:00.634) 0:00:36.404 ***********
2025-06-02 14:23:38.522844 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.522856 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.522869 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.522881 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.522893 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.522904 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.522915 | orchestrator |
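[The skip wall above comes from a device-discovery loop in the ceph-facts role. As orientation only, a minimal sketch of the pattern with assumed task and variable names, not the role's verbatim source:

# Sketch (assumed names): iterate over the gathered device facts and only
# collect candidate OSD disks when auto-discovery is enabled.
- name: Collect candidate OSD devices from hardware facts
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' ~ item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when: osd_auto_discovery | default(False) | bool   # false in this run, hence one 'skipping' line per device

The condition being false on every host suggests this testbed configures its OSD disks explicitly rather than auto-discovering them, so each loop device, sda-sdd, sr0 and dm-* item is skipped on all three storage nodes.]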
2025-06-02 14:23:38.522926 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 14:23:38.522937 | orchestrator | Monday 02 June 2025 14:13:26 +0000 (0:00:00.878) 0:00:37.283 ***********
2025-06-02 14:23:38.522947 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.522958 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.522969 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.522980 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.522990 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.523001 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.523012 | orchestrator |
2025-06-02 14:23:38.523023 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 14:23:38.523034 | orchestrator | Monday 02 June 2025 14:13:26 +0000 (0:00:00.548) 0:00:37.831 ***********
2025-06-02 14:23:38.523045 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.523055 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.523066 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.523077 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.523087 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.523098 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.523108 | orchestrator |
2025-06-02 14:23:38.523119 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 14:23:38.523130 | orchestrator | Monday 02 June 2025 14:13:27 +0000 (0:00:00.961) 0:00:38.793 ***********
2025-06-02 14:23:38.523141 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.523152 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.523162 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.523173 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.523184 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.523201 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.523212 | orchestrator |
2025-06-02 14:23:38.523223 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 14:23:38.523233 | orchestrator | Monday 02 June 2025 14:13:28 +0000 (0:00:01.074) 0:00:39.868 ***********
2025-06-02 14:23:38.523244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 14:23:38.523255 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 14:23:38.523266 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 14:23:38.523277 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 14:23:38.523288 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 14:23:38.523298 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 14:23:38.523314 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 14:23:38.523325 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 14:23:38.523336 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 14:23:38.523346 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 14:23:38.523357 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 14:23:38.523367 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 14:23:38.523378 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 14:23:38.523388 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 14:23:38.523399 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 14:23:38.523410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 14:23:38.523421 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 14:23:38.523431 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 14:23:38.523442 | orchestrator |
2025-06-02 14:23:38.523453 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 14:23:38.523470 | orchestrator | Monday 02 June 2025 14:13:32 +0000 (0:00:04.188) 0:00:44.057 ***********
2025-06-02 14:23:38.523488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 14:23:38.523500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 14:23:38.523511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 14:23:38.523521 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 14:23:38.523532 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 14:23:38.523543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 14:23:38.523553 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.523564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 14:23:38.523574 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 14:23:38.523585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 14:23:38.523596 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.523607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 14:23:38.523623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 14:23:38.523634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 14:23:38.523662 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.523673 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 14:23:38.523690 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 14:23:38.523710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 14:23:38.523728 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.523747 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.523768 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 14:23:38.523788 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 14:23:38.523800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 14:23:38.523819 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.523830 | orchestrator |
2025-06-02 14:23:38.523841 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 14:23:38.523852 | orchestrator | Monday 02 June 2025 14:13:33 +0000 (0:00:00.575) 0:00:44.633 ***********
2025-06-02 14:23:38.523863 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.523874 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.523884 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.523895 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.523906 | orchestrator |
2025-06-02 14:23:38.523918 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 14:23:38.523930 | orchestrator | Monday 02 June 2025 14:13:34 +0000 (0:00:01.022) 0:00:45.655 ***********
2025-06-02 14:23:38.523941 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.523952 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.523962 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.523973 | orchestrator |
2025-06-02 14:23:38.523984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 14:23:38.523995 | orchestrator | Monday 02 June 2025 14:13:34 +0000 (0:00:00.301) 0:00:45.957 ***********
2025-06-02 14:23:38.524005 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524016 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.524027 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.524038 | orchestrator |
2025-06-02 14:23:38.524049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 14:23:38.524060 | orchestrator | Monday 02 June 2025 14:13:35 +0000 (0:00:00.445) 0:00:46.402 ***********
2025-06-02 14:23:38.524071 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524082 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.524093 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.524103 | orchestrator |
2025-06-02 14:23:38.524114 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 14:23:38.524125 | orchestrator | Monday 02 June 2025 14:13:35 +0000 (0:00:00.285) 0:00:46.688 ***********
2025-06-02 14:23:38.524136 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.524147 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.524158 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.524169 | orchestrator |
2025-06-02 14:23:38.524179 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 14:23:38.524190 | orchestrator | Monday 02 June 2025 14:13:35 +0000 (0:00:00.347) 0:00:47.035 ***********
2025-06-02 14:23:38.524201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 14:23:38.524212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 14:23:38.524228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 14:23:38.524239 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524250 | orchestrator |
2025-06-02 14:23:38.524261 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 14:23:38.524272 | orchestrator | Monday 02 June 2025 14:13:36 +0000 (0:00:00.310) 0:00:47.345 ***********
2025-06-02 14:23:38.524283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 14:23:38.524293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 14:23:38.524304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 14:23:38.524315 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524326 | orchestrator |
2025-06-02 14:23:38.524336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 14:23:38.524347 | orchestrator | Monday 02 June 2025 14:13:36 +0000 (0:00:00.347) 0:00:47.692 ***********
2025-06-02 14:23:38.524358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 14:23:38.524376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 14:23:38.524386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 14:23:38.524400 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524418 | orchestrator |
2025-06-02 14:23:38.524429 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 14:23:38.524440 | orchestrator | Monday 02 June 2025 14:13:37 +0000 (0:00:00.541) 0:00:48.234 ***********
2025-06-02 14:23:38.524451 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.524462 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.524472 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.524483 | orchestrator |
2025-06-02 14:23:38.524494 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 14:23:38.524504 | orchestrator | Monday 02 June 2025 14:13:37 +0000 (0:00:00.518) 0:00:48.753 ***********
2025-06-02 14:23:38.524515 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 14:23:38.524526 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 14:23:38.524537 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 14:23:38.524547 | orchestrator |
2025-06-02 14:23:38.524558 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 14:23:38.524569 | orchestrator | Monday 02 June 2025 14:13:38 +0000 (0:00:00.841) 0:00:49.595 ***********
2025-06-02 14:23:38.524586 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 14:23:38.524598 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 14:23:38.524609 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 14:23:38.524620 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 14:23:38.524631 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 14:23:38.524659 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 14:23:38.524671 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 14:23:38.524682 | orchestrator |
2025-06-02 14:23:38.524693 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 14:23:38.524704 | orchestrator | Monday 02 June 2025 14:13:39 +0000 (0:00:01.043) 0:00:50.638 ***********
2025-06-02 14:23:38.524714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 14:23:38.524725 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 14:23:38.524736 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 14:23:38.524747 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 14:23:38.524758 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 14:23:38.524769 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 14:23:38.524779 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
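[The two tasks above run from testbed-node-0 and push the resulting command string onto every other host, which is why each result line reads "testbed-node-0 -> <host>". A minimal sketch of that delegation pattern, with assumed variable names, not ceph-ansible's verbatim task:

# Sketch (assumed names): build the fact once, then store it on each host
# via delegation instead of computing it everywhere.
- name: Set ceph_run_cmd on every node from a single host
  ansible.builtin.set_fact:
    ceph_run_cmd: "{{ container_binary }} run --rm --net=host -v /etc/ceph:/etc/ceph:z {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  delegate_to: "{{ item }}"
  delegate_facts: true   # store the fact on the delegated host, not on testbed-node-0
  run_once: true
  loop: "{{ ansible_play_hosts_all + ['testbed-manager'] }}"
]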
2025-06-02 14:23:38.524790 | orchestrator |
2025-06-02 14:23:38.524801 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 14:23:38.524812 | orchestrator | Monday 02 June 2025 14:13:42 +0000 (0:00:02.928) 0:00:53.567 ***********
2025-06-02 14:23:38.524823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.524836 | orchestrator |
2025-06-02 14:23:38.524847 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 14:23:38.524858 | orchestrator | Monday 02 June 2025 14:13:44 +0000 (0:00:01.777) 0:00:55.344 ***********
2025-06-02 14:23:38.524869 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.524887 | orchestrator |
2025-06-02 14:23:38.524898 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 14:23:38.524908 | orchestrator | Monday 02 June 2025 14:13:46 +0000 (0:00:01.773) 0:00:57.117 ***********
2025-06-02 14:23:38.524919 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.524930 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.524941 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.524952 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.524963 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.524974 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.524984 | orchestrator |
2025-06-02 14:23:38.524995 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 14:23:38.525011 | orchestrator | Monday 02 June 2025 14:13:47 +0000 (0:00:01.103) 0:00:58.221 ***********
2025-06-02 14:23:38.525022 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525033 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525044 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525055 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.525066 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.525077 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.525087 | orchestrator |
2025-06-02 14:23:38.525098 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 14:23:38.525109 | orchestrator | Monday 02 June 2025 14:13:48 +0000 (0:00:01.599) 0:00:59.821 ***********
2025-06-02 14:23:38.525120 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525131 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525142 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525153 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.525164 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.525174 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.525185 | orchestrator |
2025-06-02 14:23:38.525196 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 14:23:38.525207 | orchestrator | Monday 02 June 2025 14:13:50 +0000 (0:00:01.287) 0:01:01.109 ***********
2025-06-02 14:23:38.525218 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525229 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525239 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525250 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.525261 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.525272 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.525283 | orchestrator |
2025-06-02 14:23:38.525293 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 14:23:38.525304 | orchestrator | Monday 02 June 2025 14:13:51 +0000 (0:00:01.124) 0:01:02.233 ***********
2025-06-02 14:23:38.525315 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.525326 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.525336 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.525347 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.525358 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.525369 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.525380 | orchestrator |
2025-06-02 14:23:38.525391 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 14:23:38.525402 | orchestrator | Monday 02 June 2025 14:13:52 +0000 (0:00:01.047) 0:01:03.280 ***********
2025-06-02 14:23:38.525417 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525429 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525440 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525451 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.525461 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.525472 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.525483 | orchestrator |
2025-06-02 14:23:38.525494 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 14:23:38.525505 | orchestrator | Monday 02 June 2025 14:13:52 +0000 (0:00:00.564) 0:01:03.845 ***********
2025-06-02 14:23:38.525524 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525535 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525546 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525557 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.525567 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.525578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.525589 | orchestrator |
2025-06-02 14:23:38.525600 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 14:23:38.525610 | orchestrator | Monday 02 June 2025 14:13:53 +0000 (0:00:01.088) 0:01:04.933 ***********
2025-06-02 14:23:38.525621 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.525632 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.525699 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.525711 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.525722 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.525733 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.525743 | orchestrator |
2025-06-02 14:23:38.525754 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 14:23:38.525765 | orchestrator | Monday 02 June 2025 14:13:55 +0000 (0:00:01.609) 0:01:06.543 ***********
2025-06-02 14:23:38.525776 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.525787 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.525797 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.525807 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.525816 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.525826 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.525835 | orchestrator |
2025-06-02 14:23:38.525845 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 14:23:38.525854 | orchestrator | Monday 02 June 2025 14:13:57 +0000 (0:00:02.108) 0:01:08.651 ***********
2025-06-02 14:23:38.525864 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.525874 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.525883 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.525893 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.525903 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.525912 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.525922 | orchestrator |
2025-06-02 14:23:38.525931 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 14:23:38.525941 | orchestrator | Monday 02 June 2025 14:13:58 +0000 (0:00:00.622) 0:01:09.274 ***********
2025-06-02 14:23:38.525951 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.525960 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.525970 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.525980 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.525989 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.525999 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.526009 | orchestrator |
2025-06-02 14:23:38.526075 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 14:23:38.526087 | orchestrator | Monday 02 June 2025 14:13:59 +0000 (0:00:00.839) 0:01:10.113 ***********
2025-06-02 14:23:38.526096 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.526106 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.526115 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.526125 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.526134 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.526144 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.526153 | orchestrator |
2025-06-02 14:23:38.526169 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 14:23:38.526179 | orchestrator | Monday 02 June 2025 14:13:59 +0000 (0:00:00.684) 0:01:10.798 ***********
2025-06-02 14:23:38.526188 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.526199 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.526208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.526226 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.526236 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.526245 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.526255 | orchestrator |
2025-06-02 14:23:38.526265 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 14:23:38.526274 | orchestrator | Monday 02 June 2025 14:14:00 +0000 (0:00:00.962) 0:01:11.761 ***********
2025-06-02 14:23:38.526284 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.526293 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.526303 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.526312 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.526322 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.526331 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.526341 | orchestrator |
2025-06-02 14:23:38.526350 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 14:23:38.526360 | orchestrator | Monday 02 June 2025 14:14:01 +0000 (0:00:00.820) 0:01:12.581 ***********
2025-06-02 14:23:38.526370 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.526379 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.526389 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.526398 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.526408 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.526417 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.526427 | orchestrator |
2025-06-02 14:23:38.526436 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 14:23:38.526446 | orchestrator | Monday 02 June 2025 14:14:02 +0000 (0:00:00.822) 0:01:13.404 ***********
2025-06-02 14:23:38.526456 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.526465 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.526475 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.526484 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.526493 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.526503 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.526512 | orchestrator |
2025-06-02 14:23:38.526522 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 14:23:38.526549 | orchestrator | Monday 02 June 2025 14:14:02 +0000 (0:00:00.612) 0:01:14.016 ***********
2025-06-02 14:23:38.526560 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.526569 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.526579 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.526589 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.526598 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.526608 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.526618 | orchestrator |
2025-06-02 14:23:38.526627 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 14:23:38.526637 | orchestrator | Monday 02 June 2025 14:14:03 +0000 (0:00:00.766) 0:01:14.783 ***********
2025-06-02 14:23:38.526664 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.526674 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.526683 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.526693 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.526702 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.526712 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.526721 | orchestrator |
2025-06-02 14:23:38.526731 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 14:23:38.526741 | orchestrator | Monday 02 June 2025 14:14:04 +0000 (0:00:00.643) 0:01:15.426 ***********
2025-06-02 14:23:38.526750 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.526760 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.526769 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.526779 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.526788 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.526798 | orchestrator | ok: [testbed-node-5]
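[The "Check for a ... container" tasks above only probe container state so the later handler_*_status facts know which daemons can be restarted; they change nothing. A plausible shape for such a probe, sketched with assumed names and group layout, not the role's verbatim tasks:

# Sketch (assumed names): probe for a running mon container and derive a
# boolean status fact from the result; skipped on hosts outside the group.
- name: Check for a mon container
  ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get('mons', [])

- name: Set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout_lines | default([])) | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])

This matches the ok/skipping split in the log: the checks report ok on the hosts that run the daemon class (mons on nodes 0-2, osds/mds/rgw on nodes 3-5) and skipping elsewhere.]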
2025-06-02 14:23:38.526807 | orchestrator |
2025-06-02 14:23:38.526817 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-02 14:23:38.526833 | orchestrator | Monday 02 June 2025 14:14:05 +0000 (0:00:01.279) 0:01:16.705 ***********
2025-06-02 14:23:38.526843 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:23:38.526853 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:23:38.526862 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:23:38.526872 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.526881 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.526891 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.526900 | orchestrator |
2025-06-02 14:23:38.526910 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-02 14:23:38.526920 | orchestrator | Monday 02 June 2025 14:14:07 +0000 (0:00:01.822) 0:01:18.528 ***********
2025-06-02 14:23:38.526929 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:23:38.526939 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.526948 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:23:38.526958 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:23:38.526967 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.526977 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.526986 | orchestrator |
2025-06-02 14:23:38.526996 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-02 14:23:38.527006 | orchestrator | Monday 02 June 2025 14:14:09 +0000 (0:00:01.884) 0:01:20.413 ***********
2025-06-02 14:23:38.527016 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.527026 | orchestrator |
2025-06-02 14:23:38.527035 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-02 14:23:38.527045 | orchestrator | Monday 02 June 2025 14:14:10 +0000 (0:00:01.182) 0:01:21.595 ***********
2025-06-02 14:23:38.527054 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.527064 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.527074 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.527083 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.527093 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.527102 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.527111 | orchestrator |
2025-06-02 14:23:38.527125 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-02 14:23:38.527135 | orchestrator | Monday 02 June 2025 14:14:11 +0000 (0:00:00.802) 0:01:22.397 ***********
2025-06-02 14:23:38.527145 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.527155 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.527164 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.527174 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.527183 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.527192 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.527202 | orchestrator |
2025-06-02 14:23:38.527212 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-02 14:23:38.527221 | orchestrator | Monday 02 June 2025 14:14:11 +0000 (0:00:00.568) 0:01:22.966 ***********
2025-06-02 14:23:38.527231 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527241 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527251 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527260 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527270 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527280 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527289 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527299 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527315 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 14:23:38.527324 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527334 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527343 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 14:23:38.527353 | orchestrator |
2025-06-02 14:23:38.527369 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-02 14:23:38.527379 | orchestrator | Monday 02 June 2025 14:14:13 +0000 (0:00:01.563) 0:01:24.530 ***********
2025-06-02 14:23:38.527389 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:23:38.527398 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:23:38.527407 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:23:38.527417 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.527426 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.527436 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.527446 | orchestrator |
2025-06-02 14:23:38.527455 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-02 14:23:38.527465 | orchestrator | Monday 02 June 2025 14:14:14 +0000 (0:00:00.923) 0:01:25.453 ***********
2025-06-02 14:23:38.527475 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.527484 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.527494 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.527503 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.527513 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.527522 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.527532 | orchestrator |
2025-06-02 14:23:38.527541 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-02 14:23:38.527551 | orchestrator | Monday 02 June 2025 14:14:15 +0000 (0:00:00.902) 0:01:26.356 ***********
2025-06-02 14:23:38.527560 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.527570 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.527579 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.527589 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.527599 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.527608 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.527617 | orchestrator |
2025-06-02 14:23:38.527627 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-02 14:23:38.527637 | orchestrator | Monday 02 June 2025 14:14:15 +0000 (0:00:00.702) 0:01:27.059 ***********
2025-06-02 14:23:38.527662 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:23:38.527672 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:23:38.527682 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:23:38.527691 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.527701 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.527710 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.527719 | orchestrator |
2025-06-02 14:23:38.527729 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-02 14:23:38.527739 | orchestrator | Monday 02 June 2025 14:14:16 +0000 (0:00:00.840) 0:01:27.899 ***********
2025-06-02 14:23:38.527749 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.527759 | orchestrator |
2025-06-02 14:23:38.527768 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-02 14:23:38.527778 | orchestrator | Monday 02 June 2025 14:14:18 +0000 (0:00:01.273) 0:01:29.173 ***********
2025-06-02 14:23:38.527788 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.527797 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:23:38.527807 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.527816 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.527826 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:23:38.527843 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:23:38.527852 | orchestrator |
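[Pulling the Ceph image dominated this phase: the next task header shows 0:01:02.298 of the 0:02:31 elapsed was spent here. Pull tasks of this kind are usually guarded with retries; a minimal sketch with assumed names and retry policy, not the role's verbatim task:

# Sketch (assumed names): pull the configured Ceph image, retrying on
# transient registry failures.
- name: Pulling Ceph container image
  ansible.builtin.command: "{{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  register: ceph_image_pull
  changed_when: false
  retries: 3
  delay: 10
  until: ceph_image_pull.rc == 0
]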
14:23:38.527996 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528005 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 14:23:38.528015 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 14:23:38.528024 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 14:23:38.528034 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528044 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 14:23:38.528053 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 14:23:38.528063 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 14:23:38.528073 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528082 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 14:23:38.528092 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 14:23:38.528102 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 14:23:38.528116 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528126 | orchestrator | 2025-06-02 14:23:38.528136 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-02 14:23:38.528146 | orchestrator | Monday 02 June 2025 14:15:21 +0000 (0:00:00.912) 0:02:32.384 *********** 2025-06-02 14:23:38.528155 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528165 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528174 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528184 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528193 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528203 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528215 | orchestrator | 2025-06-02 14:23:38.528231 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-02 14:23:38.528252 | orchestrator | Monday 02 June 2025 14:15:21 +0000 (0:00:00.677) 0:02:33.061 *********** 2025-06-02 14:23:38.528276 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528290 | orchestrator | 2025-06-02 14:23:38.528305 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-02 14:23:38.528320 | orchestrator | Monday 02 June 2025 14:15:22 +0000 (0:00:00.169) 0:02:33.230 *********** 2025-06-02 14:23:38.528335 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528351 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528377 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528395 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528412 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528428 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528444 | orchestrator | 2025-06-02 14:23:38.528454 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-02 14:23:38.528463 | orchestrator | Monday 02 June 2025 14:15:23 +0000 (0:00:01.070) 0:02:34.301 *********** 2025-06-02 14:23:38.528473 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528483 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528492 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528501 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528511 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528520 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528530 | orchestrator | 2025-06-02 14:23:38.528539 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-02 14:23:38.528549 | orchestrator | Monday 02 June 2025 14:15:24 +0000 (0:00:00.811) 0:02:35.112 *********** 2025-06-02 14:23:38.528559 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528568 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528578 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528587 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528596 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528606 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528615 | orchestrator | 2025-06-02 14:23:38.528625 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 14:23:38.528635 | orchestrator | Monday 02 June 2025 14:15:25 +0000 (0:00:01.085) 0:02:36.198 *********** 2025-06-02 14:23:38.528663 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.528674 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.528683 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.528693 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.528702 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.528712 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.528721 | orchestrator | 2025-06-02 14:23:38.528731 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 14:23:38.528741 | orchestrator | Monday 02 June 2025 14:15:27 +0000 (0:00:02.303) 0:02:38.501 *********** 2025-06-02 14:23:38.528750 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.528760 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.528770 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.528779 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.528788 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.528798 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.528807 | orchestrator | 2025-06-02 14:23:38.528823 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 14:23:38.528833 | orchestrator | Monday 02 June 2025 14:15:28 +0000 (0:00:00.774) 0:02:39.276 *********** 2025-06-02 14:23:38.528843 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.528855 | orchestrator | 2025-06-02 14:23:38.528865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 14:23:38.528874 | orchestrator | Monday 02 June 2025 14:15:29 +0000 (0:00:01.099) 0:02:40.375 *********** 2025-06-02 14:23:38.528884 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528893 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.528912 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.528922 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.528931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.528941 
| orchestrator | 2025-06-02 14:23:38.528950 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 14:23:38.528960 | orchestrator | Monday 02 June 2025 14:15:30 +0000 (0:00:00.872) 0:02:41.248 *********** 2025-06-02 14:23:38.528977 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.528987 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.528996 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529006 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529016 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529025 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529035 | orchestrator | 2025-06-02 14:23:38.529045 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 14:23:38.529054 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.871) 0:02:42.119 *********** 2025-06-02 14:23:38.529064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529074 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529083 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529093 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529102 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529112 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529121 | orchestrator | 2025-06-02 14:23:38.529131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-02 14:23:38.529149 | orchestrator | Monday 02 June 2025 14:15:31 +0000 (0:00:00.689) 0:02:42.809 *********** 2025-06-02 14:23:38.529159 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529168 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529178 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529187 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529197 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529206 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529216 | orchestrator | 2025-06-02 14:23:38.529226 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-02 14:23:38.529235 | orchestrator | Monday 02 June 2025 14:15:32 +0000 (0:00:01.058) 0:02:43.867 *********** 2025-06-02 14:23:38.529245 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529254 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529264 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529273 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529283 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529292 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529302 | orchestrator | 2025-06-02 14:23:38.529311 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-02 14:23:38.529321 | orchestrator | Monday 02 June 2025 14:15:33 +0000 (0:00:00.611) 0:02:44.478 *********** 2025-06-02 14:23:38.529331 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529340 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529350 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529359 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529369 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529378 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
14:23:38.529388 | orchestrator | 2025-06-02 14:23:38.529397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-02 14:23:38.529407 | orchestrator | Monday 02 June 2025 14:15:34 +0000 (0:00:00.721) 0:02:45.199 *********** 2025-06-02 14:23:38.529417 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529436 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529445 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529455 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529464 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529474 | orchestrator | 2025-06-02 14:23:38.529483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-02 14:23:38.529493 | orchestrator | Monday 02 June 2025 14:15:34 +0000 (0:00:00.548) 0:02:45.748 *********** 2025-06-02 14:23:38.529502 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.529512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.529528 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.529538 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.529547 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.529557 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.529566 | orchestrator | 2025-06-02 14:23:38.529576 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-02 14:23:38.529586 | orchestrator | Monday 02 June 2025 14:15:35 +0000 (0:00:00.860) 0:02:46.608 *********** 2025-06-02 14:23:38.529596 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.529605 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.529615 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.529624 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.529634 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.529686 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.529696 | orchestrator | 2025-06-02 14:23:38.529707 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-02 14:23:38.529716 | orchestrator | Monday 02 June 2025 14:15:36 +0000 (0:00:01.268) 0:02:47.877 *********** 2025-06-02 14:23:38.529731 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.529741 | orchestrator | 2025-06-02 14:23:38.529751 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-02 14:23:38.529760 | orchestrator | Monday 02 June 2025 14:15:38 +0000 (0:00:01.245) 0:02:49.122 *********** 2025-06-02 14:23:38.529770 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-02 14:23:38.529780 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-02 14:23:38.529789 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-02 14:23:38.529799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529818 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-02 14:23:38.529828 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-02 14:23:38.529837 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529847 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529856 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-02 14:23:38.529866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529875 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529895 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529904 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-02 14:23:38.529914 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.529923 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.529931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529939 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529947 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.529955 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.529967 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-02 14:23:38.529976 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.529983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.529991 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.529999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.530007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530045 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-02 14:23:38.530055 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.530063 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.530070 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530086 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530094 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-02 14:23:38.530102 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530134 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530142 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-02 14:23:38.530150 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530158 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530166 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530174 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530189 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-02 14:23:38.530205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530213 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530221 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530228 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530236 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530244 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530252 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-02 14:23:38.530260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530268 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530283 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530295 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 14:23:38.530311 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530319 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530335 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 14:23:38.530350 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 14:23:38.530409 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530416 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530424 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530432 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530440 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530448 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 14:23:38.530463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530476 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530484 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530492 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-02 14:23:38.530500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530507 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 14:23:38.530515 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-02 14:23:38.530523 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-02 14:23:38.530531 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-02 14:23:38.530539 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-02 14:23:38.530547 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-02 14:23:38.530554 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 14:23:38.530562 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-02 14:23:38.530570 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-02 14:23:38.530578 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-02 14:23:38.530586 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-02 14:23:38.530593 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-02 14:23:38.530601 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-02 14:23:38.530609 | orchestrator | 2025-06-02 14:23:38.530617 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-02 14:23:38.530625 | orchestrator | Monday 02 June 2025 14:15:44 +0000 (0:00:06.579) 0:02:55.702 *********** 2025-06-02 14:23:38.530633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.530655 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.530663 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.530671 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.530679 | orchestrator | 2025-06-02 14:23:38.530687 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-02 14:23:38.530695 | orchestrator | Monday 02 June 2025 14:15:46 +0000 (0:00:01.431) 0:02:57.133 *********** 2025-06-02 14:23:38.530703 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530712 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530720 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530733 | orchestrator | 2025-06-02 14:23:38.530741 | orchestrator | TASK [ceph-config 
: Generate environment file] ********************************* 2025-06-02 14:23:38.530749 | orchestrator | Monday 02 June 2025 14:15:46 +0000 (0:00:00.755) 0:02:57.889 *********** 2025-06-02 14:23:38.530757 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530769 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530777 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.530785 | orchestrator | 2025-06-02 14:23:38.530793 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-02 14:23:38.530801 | orchestrator | Monday 02 June 2025 14:15:48 +0000 (0:00:01.691) 0:02:59.581 *********** 2025-06-02 14:23:38.530809 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.530817 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.530824 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.530832 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.530840 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.530848 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.530856 | orchestrator | 2025-06-02 14:23:38.530863 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-02 14:23:38.530871 | orchestrator | Monday 02 June 2025 14:15:49 +0000 (0:00:00.778) 0:03:00.360 *********** 2025-06-02 14:23:38.530879 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.530887 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.530895 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.530903 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.530911 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.530918 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.530926 | orchestrator | 2025-06-02 14:23:38.530934 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-02 14:23:38.530942 | orchestrator | Monday 02 June 2025 14:15:50 +0000 (0:00:01.071) 0:03:01.431 *********** 2025-06-02 14:23:38.530950 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.530957 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.530965 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.530973 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.530981 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.530989 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.530996 | orchestrator | 2025-06-02 14:23:38.531004 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-02 14:23:38.531012 | orchestrator | Monday 02 June 2025 14:15:51 +0000 (0:00:00.774) 0:03:02.205 *********** 2025-06-02 14:23:38.531020 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531028 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531041 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531049 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531057 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531065 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531072 | orchestrator | 
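[Annotation] The OSD-counting tasks around this point are thin wrappers over plain ceph-volume calls: ceph-ansible asks ceph-volume to *report* what a batch run would create, then stores the count as a fact. A minimal sketch of that pattern, assuming a `devices` list variable — illustrative only, not the actual ceph-ansible task source:

    # Sketch only: mirrors the "ceph-volume lvm batch --report" step named in
    # the log. --report makes ceph-volume print what it WOULD do, without
    # creating any OSDs; the JSON output is parsed to count prospective OSDs.
    - name: Run 'ceph-volume lvm batch --report'
      ansible.builtin.command: >
        ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
      register: lvm_batch_report
      changed_when: false

    # Sketch only: assumes the report is a JSON list with one entry per OSD
    # to be created, so its length is the number of OSDs a batch run would add.
    - name: Set_fact num_osds
      ansible.builtin.set_fact:
        num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"

In this run the batch-report steps skip on every node; the count is instead taken from the "ceph-volume lvm list" task a few entries below, which inventories OSDs that already exist.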
2025-06-02 14:23:38.531080 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-02 14:23:38.531088 | orchestrator | Monday 02 June 2025 14:15:51 +0000 (0:00:00.786) 0:03:02.992 *********** 2025-06-02 14:23:38.531096 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531104 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531112 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531119 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531127 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531135 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531143 | orchestrator | 2025-06-02 14:23:38.531151 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-02 14:23:38.531163 | orchestrator | Monday 02 June 2025 14:15:52 +0000 (0:00:00.522) 0:03:03.514 *********** 2025-06-02 14:23:38.531171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531179 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531187 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531195 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531203 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531211 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531218 | orchestrator | 2025-06-02 14:23:38.531226 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-02 14:23:38.531234 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.637) 0:03:04.152 *********** 2025-06-02 14:23:38.531242 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531250 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531257 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531265 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531273 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531281 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531288 | orchestrator | 2025-06-02 14:23:38.531296 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-02 14:23:38.531305 | orchestrator | Monday 02 June 2025 14:15:53 +0000 (0:00:00.527) 0:03:04.679 *********** 2025-06-02 14:23:38.531312 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531320 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531328 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531336 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531344 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531351 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531359 | orchestrator | 2025-06-02 14:23:38.531367 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-02 14:23:38.531375 | orchestrator | Monday 02 June 2025 14:15:54 +0000 (0:00:00.691) 0:03:05.371 *********** 2025-06-02 14:23:38.531383 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531391 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531398 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531406 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.531414 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.531422 
| orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.531430 | orchestrator | 2025-06-02 14:23:38.531438 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-02 14:23:38.531445 | orchestrator | Monday 02 June 2025 14:15:57 +0000 (0:00:03.078) 0:03:08.449 *********** 2025-06-02 14:23:38.531453 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531461 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531469 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531477 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.531484 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.531496 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.531505 | orchestrator | 2025-06-02 14:23:38.531513 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-02 14:23:38.531521 | orchestrator | Monday 02 June 2025 14:15:58 +0000 (0:00:00.732) 0:03:09.181 *********** 2025-06-02 14:23:38.531529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531536 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531544 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531552 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.531560 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.531568 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.531576 | orchestrator | 2025-06-02 14:23:38.531584 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-02 14:23:38.531591 | orchestrator | Monday 02 June 2025 14:15:58 +0000 (0:00:00.559) 0:03:09.741 *********** 2025-06-02 14:23:38.531605 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531612 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531628 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531636 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531658 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531666 | orchestrator | 2025-06-02 14:23:38.531674 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-02 14:23:38.531682 | orchestrator | Monday 02 June 2025 14:15:59 +0000 (0:00:00.708) 0:03:10.449 *********** 2025-06-02 14:23:38.531690 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531698 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531706 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531713 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.531722 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.531730 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 14:23:38.531738 | orchestrator | 2025-06-02 14:23:38.531746 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-02 14:23:38.531759 | orchestrator | Monday 02 June 2025 14:15:59 +0000 (0:00:00.602) 0:03:11.052 *********** 2025-06-02 14:23:38.531767 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531775 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 14:23:38.531782 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531791 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-02 14:23:38.531802 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-02 14:23:38.531812 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531820 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-02 14:23:38.531828 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-02 14:23:38.531836 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531844 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-02 14:23:38.531852 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-02 14:23:38.531860 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531873 | orchestrator | 2025-06-02 14:23:38.531881 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-02 14:23:38.531889 | orchestrator | Monday 02 June 2025 14:16:00 +0000 (0:00:00.934) 0:03:11.986 *********** 2025-06-02 14:23:38.531897 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531905 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.531913 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531921 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.531928 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.531943 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.531951 | orchestrator | 2025-06-02 14:23:38.531959 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-02 14:23:38.531967 | orchestrator | Monday 02 June 2025 14:16:01 +0000 (0:00:00.612) 0:03:12.599 *********** 2025-06-02 14:23:38.531975 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.531983 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
14:23:38.531991 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.531998 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.532006 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.532014 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.532022 | orchestrator | 2025-06-02 14:23:38.532030 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 14:23:38.532038 | orchestrator | Monday 02 June 2025 14:16:02 +0000 (0:00:00.827) 0:03:13.426 *********** 2025-06-02 14:23:38.532046 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532054 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532061 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532069 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.532077 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.532084 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.532092 | orchestrator | 2025-06-02 14:23:38.532100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 14:23:38.532108 | orchestrator | Monday 02 June 2025 14:16:03 +0000 (0:00:00.657) 0:03:14.083 *********** 2025-06-02 14:23:38.532116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532124 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532139 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.532147 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.532155 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.532163 | orchestrator | 2025-06-02 14:23:38.532170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 14:23:38.532178 | orchestrator | Monday 02 June 2025 14:16:03 +0000 (0:00:00.864) 0:03:14.948 *********** 2025-06-02 14:23:38.532186 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532194 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532202 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532214 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.532223 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.532230 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.532238 | orchestrator | 2025-06-02 14:23:38.532246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 14:23:38.532254 | orchestrator | Monday 02 June 2025 14:16:04 +0000 (0:00:00.644) 0:03:15.593 *********** 2025-06-02 14:23:38.532262 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532270 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532277 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532285 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.532293 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.532301 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.532309 | orchestrator | 2025-06-02 14:23:38.532317 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 14:23:38.532330 | orchestrator | Monday 02 June 2025 14:16:05 +0000 (0:00:01.224) 0:03:16.817 *********** 2025-06-02 14:23:38.532338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 
14:23:38.532346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 14:23:38.532354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 14:23:38.532362 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532369 | orchestrator | 2025-06-02 14:23:38.532377 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 14:23:38.532386 | orchestrator | Monday 02 June 2025 14:16:06 +0000 (0:00:00.408) 0:03:17.226 *********** 2025-06-02 14:23:38.532393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 14:23:38.532401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 14:23:38.532409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 14:23:38.532417 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532425 | orchestrator | 2025-06-02 14:23:38.532432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 14:23:38.532440 | orchestrator | Monday 02 June 2025 14:16:06 +0000 (0:00:00.407) 0:03:17.633 *********** 2025-06-02 14:23:38.532448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 14:23:38.532456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 14:23:38.532464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 14:23:38.532472 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532480 | orchestrator | 2025-06-02 14:23:38.532488 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 14:23:38.532496 | orchestrator | Monday 02 June 2025 14:16:06 +0000 (0:00:00.422) 0:03:18.055 *********** 2025-06-02 14:23:38.532504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532511 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532519 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532527 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.532535 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.532543 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.532551 | orchestrator | 2025-06-02 14:23:38.532558 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 14:23:38.532566 | orchestrator | Monday 02 June 2025 14:16:07 +0000 (0:00:00.626) 0:03:18.682 *********** 2025-06-02 14:23:38.532574 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-02 14:23:38.532582 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.532590 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-02 14:23:38.532598 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.532606 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-02 14:23:38.532613 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.532621 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 14:23:38.532633 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 14:23:38.532674 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 14:23:38.532683 | orchestrator | 2025-06-02 14:23:38.532691 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-02 14:23:38.532699 | orchestrator | Monday 02 June 2025 14:16:09 +0000 (0:00:01.910) 0:03:20.593 *********** 2025-06-02 14:23:38.532707 | orchestrator 
| changed: [testbed-node-0] 2025-06-02 14:23:38.532715 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.532723 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.532730 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.532738 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.532746 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.532754 | orchestrator | 2025-06-02 14:23:38.532762 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 14:23:38.532770 | orchestrator | Monday 02 June 2025 14:16:12 +0000 (0:00:02.493) 0:03:23.086 *********** 2025-06-02 14:23:38.532788 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.532796 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.532803 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.532811 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.532819 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.532827 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.532835 | orchestrator | 2025-06-02 14:23:38.532843 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 14:23:38.532851 | orchestrator | Monday 02 June 2025 14:16:12 +0000 (0:00:00.914) 0:03:24.000 *********** 2025-06-02 14:23:38.532858 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.532866 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.532874 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.532882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.532890 | orchestrator | 2025-06-02 14:23:38.532898 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 14:23:38.532906 | orchestrator | Monday 02 June 2025 14:16:13 +0000 (0:00:00.846) 0:03:24.846 *********** 2025-06-02 14:23:38.532914 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.532922 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.532929 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.532937 | orchestrator | 2025-06-02 14:23:38.532945 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 14:23:38.532959 | orchestrator | Monday 02 June 2025 14:16:14 +0000 (0:00:00.297) 0:03:25.144 *********** 2025-06-02 14:23:38.532967 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.532975 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.532983 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.532990 | orchestrator | 2025-06-02 14:23:38.532998 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 14:23:38.533006 | orchestrator | Monday 02 June 2025 14:16:15 +0000 (0:00:01.368) 0:03:26.513 *********** 2025-06-02 14:23:38.533014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 14:23:38.533022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 14:23:38.533030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 14:23:38.533038 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.533046 | orchestrator | 2025-06-02 14:23:38.533053 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 
14:23:38.533061 | orchestrator | Monday 02 June 2025 14:16:16 +0000 (0:00:00.589) 0:03:27.102 *********** 2025-06-02 14:23:38.533069 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.533077 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.533085 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.533093 | orchestrator | 2025-06-02 14:23:38.533101 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-02 14:23:38.533108 | orchestrator | Monday 02 June 2025 14:16:16 +0000 (0:00:00.314) 0:03:27.417 *********** 2025-06-02 14:23:38.533116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.533124 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.533132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.533140 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.533148 | orchestrator | 2025-06-02 14:23:38.533156 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 14:23:38.533164 | orchestrator | Monday 02 June 2025 14:16:17 +0000 (0:00:00.969) 0:03:28.387 *********** 2025-06-02 14:23:38.533172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.533180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.533187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.533195 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533209 | orchestrator | 2025-06-02 14:23:38.533216 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 14:23:38.533222 | orchestrator | Monday 02 June 2025 14:16:17 +0000 (0:00:00.424) 0:03:28.812 *********** 2025-06-02 14:23:38.533229 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533236 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.533242 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.533249 | orchestrator | 2025-06-02 14:23:38.533255 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 14:23:38.533262 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:00.317) 0:03:29.129 *********** 2025-06-02 14:23:38.533269 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533275 | orchestrator | 2025-06-02 14:23:38.533282 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 14:23:38.533289 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:00.195) 0:03:29.325 *********** 2025-06-02 14:23:38.533295 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533302 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.533308 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.533315 | orchestrator | 2025-06-02 14:23:38.533321 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 14:23:38.533332 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:00.295) 0:03:29.620 *********** 2025-06-02 14:23:38.533339 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533345 | orchestrator | 2025-06-02 14:23:38.533352 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 14:23:38.533359 | orchestrator | Monday 02 June 2025 
14:16:18 +0000 (0:00:00.212) 0:03:29.832 *********** 2025-06-02 14:23:38.533365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533372 | orchestrator | 2025-06-02 14:23:38.533378 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 14:23:38.533385 | orchestrator | Monday 02 June 2025 14:16:18 +0000 (0:00:00.234) 0:03:30.066 *********** 2025-06-02 14:23:38.533392 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533398 | orchestrator | 2025-06-02 14:23:38.533405 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 14:23:38.533412 | orchestrator | Monday 02 June 2025 14:16:19 +0000 (0:00:00.398) 0:03:30.465 *********** 2025-06-02 14:23:38.533419 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533425 | orchestrator | 2025-06-02 14:23:38.533432 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 14:23:38.533439 | orchestrator | Monday 02 June 2025 14:16:19 +0000 (0:00:00.259) 0:03:30.725 *********** 2025-06-02 14:23:38.533445 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533452 | orchestrator | 2025-06-02 14:23:38.533458 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 14:23:38.533465 | orchestrator | Monday 02 June 2025 14:16:19 +0000 (0:00:00.241) 0:03:30.966 *********** 2025-06-02 14:23:38.533472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.533479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.533485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.533492 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533499 | orchestrator | 2025-06-02 14:23:38.533505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 14:23:38.533512 | orchestrator | Monday 02 June 2025 14:16:20 +0000 (0:00:00.417) 0:03:31.383 *********** 2025-06-02 14:23:38.533519 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533525 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.533532 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.533539 | orchestrator | 2025-06-02 14:23:38.533550 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 14:23:38.533557 | orchestrator | Monday 02 June 2025 14:16:20 +0000 (0:00:00.326) 0:03:31.710 *********** 2025-06-02 14:23:38.533568 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533575 | orchestrator | 2025-06-02 14:23:38.533582 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 14:23:38.533588 | orchestrator | Monday 02 June 2025 14:16:20 +0000 (0:00:00.212) 0:03:31.922 *********** 2025-06-02 14:23:38.533595 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533601 | orchestrator | 2025-06-02 14:23:38.533608 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 14:23:38.533615 | orchestrator | Monday 02 June 2025 14:16:21 +0000 (0:00:00.230) 0:03:32.152 *********** 2025-06-02 14:23:38.533621 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.533628 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.533635 | orchestrator | skipping: [testbed-node-2] 
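[Annotation] The RUNNING HANDLER blocks in this stretch all follow one pattern per daemon type: record that the handler fired, template a restart script into the tempdir created by the earlier "Make tempdir for scripts" handler, then execute that script node by node only when a restart is actually required. A minimal sketch of the shape of such a handler pair, with illustrative file and variable names — not the actual ceph-ansible handler source:

    # Sketch only: render the per-daemon restart helper into the temp dir
    # registered earlier as tmpdirpath.
    - name: Copy mds restart script
      ansible.builtin.template:
        src: restart_mds_daemon.sh.j2
        dest: "{{ tmpdirpath.path }}/restart_mds_daemon.sh"
        mode: "0750"

    # Sketch only: run the helper once, iterating over the MDS nodes via
    # delegate_to, and only when the restart guard evaluates true.
    - name: Restart ceph mds daemon(s)
      ansible.builtin.command: "{{ tmpdirpath.path }}/restart_mds_daemon.sh"
      when: mds_needs_restart | default(false)
      with_items: "{{ groups['mdss'] }}"
      delegate_to: "{{ item }}"
      run_once: true

The guard is why every "Restart ceph ... daemon(s)" task in this run shows skipping: the restart condition evaluated false on all nodes, as expected on an initial deployment where the configuration has only just been laid down.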
2025-06-02 14:23:38.533654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.533661 | orchestrator | 2025-06-02 14:23:38.533668 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 14:23:38.533674 | orchestrator | Monday 02 June 2025 14:16:22 +0000 (0:00:01.125) 0:03:33.278 *********** 2025-06-02 14:23:38.533681 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.533688 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.533695 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.533701 | orchestrator | 2025-06-02 14:23:38.533708 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 14:23:38.533715 | orchestrator | Monday 02 June 2025 14:16:22 +0000 (0:00:00.391) 0:03:33.670 *********** 2025-06-02 14:23:38.533721 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.533728 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.533735 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.533741 | orchestrator | 2025-06-02 14:23:38.533748 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 14:23:38.533755 | orchestrator | Monday 02 June 2025 14:16:23 +0000 (0:00:01.231) 0:03:34.901 *********** 2025-06-02 14:23:38.533761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.533768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.533775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.533782 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.533788 | orchestrator | 2025-06-02 14:23:38.533795 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 14:23:38.533801 | orchestrator | Monday 02 June 2025 14:16:24 +0000 (0:00:01.014) 0:03:35.916 *********** 2025-06-02 14:23:38.533808 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.533814 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.533821 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.533828 | orchestrator | 2025-06-02 14:23:38.533835 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-02 14:23:38.533841 | orchestrator | Monday 02 June 2025 14:16:25 +0000 (0:00:00.375) 0:03:36.292 *********** 2025-06-02 14:23:38.533848 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.533854 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.533861 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.533868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.533875 | orchestrator | 2025-06-02 14:23:38.533881 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 14:23:38.533888 | orchestrator | Monday 02 June 2025 14:16:26 +0000 (0:00:01.081) 0:03:37.373 *********** 2025-06-02 14:23:38.533898 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.533905 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.533912 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.533918 | orchestrator | 2025-06-02 14:23:38.533925 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-06-02 14:23:38.533937 | orchestrator | Monday 02 June 2025 14:16:26 +0000 (0:00:00.346) 0:03:37.720 *********** 2025-06-02 14:23:38.533943 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.533950 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.533956 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.533963 | orchestrator | 2025-06-02 14:23:38.533970 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 14:23:38.533977 | orchestrator | Monday 02 June 2025 14:16:27 +0000 (0:00:01.246) 0:03:38.966 *********** 2025-06-02 14:23:38.533983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.533990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.533997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.534004 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.534010 | orchestrator | 2025-06-02 14:23:38.534105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 14:23:38.534113 | orchestrator | Monday 02 June 2025 14:16:28 +0000 (0:00:00.980) 0:03:39.947 *********** 2025-06-02 14:23:38.534119 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.534126 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.534133 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.534139 | orchestrator | 2025-06-02 14:23:38.534146 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-02 14:23:38.534153 | orchestrator | Monday 02 June 2025 14:16:29 +0000 (0:00:00.388) 0:03:40.335 *********** 2025-06-02 14:23:38.534160 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534166 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534173 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534179 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.534186 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.534193 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.534199 | orchestrator | 2025-06-02 14:23:38.534206 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 14:23:38.534213 | orchestrator | Monday 02 June 2025 14:16:30 +0000 (0:00:01.118) 0:03:41.454 *********** 2025-06-02 14:23:38.534245 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.534253 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.534260 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.534267 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.534273 | orchestrator | 2025-06-02 14:23:38.534280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 14:23:38.534287 | orchestrator | Monday 02 June 2025 14:16:31 +0000 (0:00:01.180) 0:03:42.635 *********** 2025-06-02 14:23:38.534294 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534300 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534307 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534314 | orchestrator | 2025-06-02 14:23:38.534320 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 14:23:38.534327 | orchestrator | Monday 02 
June 2025 14:16:31 +0000 (0:00:00.280) 0:03:42.916 *********** 2025-06-02 14:23:38.534334 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.534340 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.534347 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.534353 | orchestrator | 2025-06-02 14:23:38.534360 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 14:23:38.534367 | orchestrator | Monday 02 June 2025 14:16:32 +0000 (0:00:01.141) 0:03:44.057 *********** 2025-06-02 14:23:38.534373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 14:23:38.534380 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 14:23:38.534387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 14:23:38.534394 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534400 | orchestrator | 2025-06-02 14:23:38.534412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 14:23:38.534419 | orchestrator | Monday 02 June 2025 14:16:33 +0000 (0:00:00.768) 0:03:44.826 *********** 2025-06-02 14:23:38.534426 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534432 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534439 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534446 | orchestrator | 2025-06-02 14:23:38.534453 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-02 14:23:38.534459 | orchestrator | 2025-06-02 14:23:38.534466 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 14:23:38.534473 | orchestrator | Monday 02 June 2025 14:16:34 +0000 (0:00:00.662) 0:03:45.488 *********** 2025-06-02 14:23:38.534479 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.534486 | orchestrator | 2025-06-02 14:23:38.534493 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 14:23:38.534500 | orchestrator | Monday 02 June 2025 14:16:34 +0000 (0:00:00.454) 0:03:45.942 *********** 2025-06-02 14:23:38.534506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.534513 | orchestrator | 2025-06-02 14:23:38.534520 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 14:23:38.534526 | orchestrator | Monday 02 June 2025 14:16:35 +0000 (0:00:00.643) 0:03:46.586 *********** 2025-06-02 14:23:38.534533 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534540 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534546 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534553 | orchestrator | 2025-06-02 14:23:38.534559 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 14:23:38.534566 | orchestrator | Monday 02 June 2025 14:16:36 +0000 (0:00:00.733) 0:03:47.320 *********** 2025-06-02 14:23:38.534577 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534584 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534590 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534597 | orchestrator | 2025-06-02 14:23:38.534604 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-06-02 14:23:38.534610 | orchestrator | Monday 02 June 2025 14:16:36 +0000 (0:00:00.287) 0:03:47.608 *********** 2025-06-02 14:23:38.534617 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534624 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534630 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534637 | orchestrator | 2025-06-02 14:23:38.534680 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 14:23:38.534688 | orchestrator | Monday 02 June 2025 14:16:36 +0000 (0:00:00.276) 0:03:47.885 *********** 2025-06-02 14:23:38.534694 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534701 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534708 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534715 | orchestrator | 2025-06-02 14:23:38.534721 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 14:23:38.534728 | orchestrator | Monday 02 June 2025 14:16:37 +0000 (0:00:00.633) 0:03:48.519 *********** 2025-06-02 14:23:38.534735 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534741 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534748 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534755 | orchestrator | 2025-06-02 14:23:38.534762 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 14:23:38.534768 | orchestrator | Monday 02 June 2025 14:16:38 +0000 (0:00:00.745) 0:03:49.264 *********** 2025-06-02 14:23:38.534775 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534782 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534788 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534795 | orchestrator | 2025-06-02 14:23:38.534802 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 14:23:38.534814 | orchestrator | Monday 02 June 2025 14:16:38 +0000 (0:00:00.330) 0:03:49.595 *********** 2025-06-02 14:23:38.534821 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534827 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534834 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534841 | orchestrator | 2025-06-02 14:23:38.534848 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 14:23:38.534878 | orchestrator | Monday 02 June 2025 14:16:38 +0000 (0:00:00.327) 0:03:49.923 *********** 2025-06-02 14:23:38.534886 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534892 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534899 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534906 | orchestrator | 2025-06-02 14:23:38.534912 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 14:23:38.534919 | orchestrator | Monday 02 June 2025 14:16:40 +0000 (0:00:01.198) 0:03:51.121 *********** 2025-06-02 14:23:38.534925 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.534932 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.534939 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.534945 | orchestrator | 2025-06-02 14:23:38.534952 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 14:23:38.534959 | 
orchestrator | Monday 02 June 2025 14:16:40 +0000 (0:00:00.716) 0:03:51.837 *********** 2025-06-02 14:23:38.534965 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.534972 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.534978 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.534985 | orchestrator | 2025-06-02 14:23:38.534991 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 14:23:38.534997 | orchestrator | Monday 02 June 2025 14:16:41 +0000 (0:00:00.296) 0:03:52.134 *********** 2025-06-02 14:23:38.535003 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535009 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535015 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535021 | orchestrator | 2025-06-02 14:23:38.535027 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 14:23:38.535033 | orchestrator | Monday 02 June 2025 14:16:41 +0000 (0:00:00.330) 0:03:52.465 *********** 2025-06-02 14:23:38.535040 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535046 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.535052 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.535058 | orchestrator | 2025-06-02 14:23:38.535064 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 14:23:38.535070 | orchestrator | Monday 02 June 2025 14:16:41 +0000 (0:00:00.473) 0:03:52.938 *********** 2025-06-02 14:23:38.535076 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535082 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.535088 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.535095 | orchestrator | 2025-06-02 14:23:38.535101 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 14:23:38.535107 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:00.281) 0:03:53.220 *********** 2025-06-02 14:23:38.535113 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535119 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.535125 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.535131 | orchestrator | 2025-06-02 14:23:38.535137 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 14:23:38.535144 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:00.312) 0:03:53.533 *********** 2025-06-02 14:23:38.535150 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535156 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.535162 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.535168 | orchestrator | 2025-06-02 14:23:38.535174 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 14:23:38.535185 | orchestrator | Monday 02 June 2025 14:16:42 +0000 (0:00:00.267) 0:03:53.800 *********** 2025-06-02 14:23:38.535191 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535197 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.535203 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.535209 | orchestrator | 2025-06-02 14:23:38.535215 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 14:23:38.535221 | orchestrator | Monday 02 June 2025 14:16:43 +0000 (0:00:00.452) 0:03:54.253 
*********** 2025-06-02 14:23:38.535231 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535237 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535243 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535249 | orchestrator | 2025-06-02 14:23:38.535256 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 14:23:38.535262 | orchestrator | Monday 02 June 2025 14:16:43 +0000 (0:00:00.325) 0:03:54.579 *********** 2025-06-02 14:23:38.535268 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535274 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535280 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535287 | orchestrator | 2025-06-02 14:23:38.535293 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 14:23:38.535299 | orchestrator | Monday 02 June 2025 14:16:43 +0000 (0:00:00.298) 0:03:54.877 *********** 2025-06-02 14:23:38.535305 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535311 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535317 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535323 | orchestrator | 2025-06-02 14:23:38.535330 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-02 14:23:38.535336 | orchestrator | Monday 02 June 2025 14:16:44 +0000 (0:00:00.621) 0:03:55.499 *********** 2025-06-02 14:23:38.535342 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535348 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535354 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535360 | orchestrator | 2025-06-02 14:23:38.535366 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-02 14:23:38.535373 | orchestrator | Monday 02 June 2025 14:16:44 +0000 (0:00:00.316) 0:03:55.815 *********** 2025-06-02 14:23:38.535379 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.535385 | orchestrator | 2025-06-02 14:23:38.535391 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-02 14:23:38.535398 | orchestrator | Monday 02 June 2025 14:16:45 +0000 (0:00:00.515) 0:03:56.331 *********** 2025-06-02 14:23:38.535404 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.535410 | orchestrator | 2025-06-02 14:23:38.535416 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-02 14:23:38.535422 | orchestrator | Monday 02 June 2025 14:16:45 +0000 (0:00:00.134) 0:03:56.465 *********** 2025-06-02 14:23:38.535428 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 14:23:38.535434 | orchestrator | 2025-06-02 14:23:38.535460 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-02 14:23:38.535466 | orchestrator | Monday 02 June 2025 14:16:46 +0000 (0:00:01.293) 0:03:57.758 *********** 2025-06-02 14:23:38.535473 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535479 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535485 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535491 | orchestrator | 2025-06-02 14:23:38.535497 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-02 14:23:38.535503 | orchestrator | Monday 02 June 2025 14:16:47 +0000 
(0:00:00.327) 0:03:58.086 *********** 2025-06-02 14:23:38.535510 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535516 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535522 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535528 | orchestrator | 2025-06-02 14:23:38.535534 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-02 14:23:38.535545 | orchestrator | Monday 02 June 2025 14:16:47 +0000 (0:00:00.317) 0:03:58.403 *********** 2025-06-02 14:23:38.535551 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.535557 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535563 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.535569 | orchestrator | 2025-06-02 14:23:38.535576 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-02 14:23:38.535582 | orchestrator | Monday 02 June 2025 14:16:48 +0000 (0:00:01.218) 0:03:59.622 *********** 2025-06-02 14:23:38.535588 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535594 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.535600 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.535606 | orchestrator | 2025-06-02 14:23:38.535613 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-02 14:23:38.535619 | orchestrator | Monday 02 June 2025 14:16:49 +0000 (0:00:00.978) 0:04:00.600 *********** 2025-06-02 14:23:38.535625 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535631 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.535637 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.535659 | orchestrator | 2025-06-02 14:23:38.535665 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-02 14:23:38.535672 | orchestrator | Monday 02 June 2025 14:16:50 +0000 (0:00:00.713) 0:04:01.313 *********** 2025-06-02 14:23:38.535678 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535684 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535690 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535696 | orchestrator | 2025-06-02 14:23:38.535703 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-02 14:23:38.535709 | orchestrator | Monday 02 June 2025 14:16:50 +0000 (0:00:00.740) 0:04:02.054 *********** 2025-06-02 14:23:38.535715 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535721 | orchestrator | 2025-06-02 14:23:38.535728 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-02 14:23:38.535734 | orchestrator | Monday 02 June 2025 14:16:52 +0000 (0:00:01.200) 0:04:03.254 *********** 2025-06-02 14:23:38.535740 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535746 | orchestrator | 2025-06-02 14:23:38.535752 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-02 14:23:38.535759 | orchestrator | Monday 02 June 2025 14:16:52 +0000 (0:00:00.749) 0:04:04.004 *********** 2025-06-02 14:23:38.535765 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.535771 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 14:23:38.535778 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 14:23:38.535784 | orchestrator | 
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 14:23:38.535791 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-02 14:23:38.535803 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 14:23:38.535809 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 14:23:38.535816 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-02 14:23:38.535822 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 14:23:38.535828 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-02 14:23:38.535834 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-02 14:23:38.535841 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-02 14:23:38.535847 | orchestrator | 2025-06-02 14:23:38.535853 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-02 14:23:38.535859 | orchestrator | Monday 02 June 2025 14:16:56 +0000 (0:00:03.497) 0:04:07.502 *********** 2025-06-02 14:23:38.535866 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535872 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.535878 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.535889 | orchestrator | 2025-06-02 14:23:38.535896 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-02 14:23:38.535902 | orchestrator | Monday 02 June 2025 14:16:57 +0000 (0:00:01.502) 0:04:09.004 *********** 2025-06-02 14:23:38.535908 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535914 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535920 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535927 | orchestrator | 2025-06-02 14:23:38.535933 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-02 14:23:38.535939 | orchestrator | Monday 02 June 2025 14:16:58 +0000 (0:00:00.333) 0:04:09.337 *********** 2025-06-02 14:23:38.535945 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.535952 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.535958 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.535964 | orchestrator | 2025-06-02 14:23:38.535970 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-02 14:23:38.535976 | orchestrator | Monday 02 June 2025 14:16:58 +0000 (0:00:00.320) 0:04:09.657 *********** 2025-06-02 14:23:38.535983 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.535989 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.535995 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.536001 | orchestrator | 2025-06-02 14:23:38.536008 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-02 14:23:38.536033 | orchestrator | Monday 02 June 2025 14:17:00 +0000 (0:00:01.755) 0:04:11.413 *********** 2025-06-02 14:23:38.536041 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.536047 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.536053 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.536059 | orchestrator | 2025-06-02 14:23:38.536065 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-02 14:23:38.536071 | orchestrator | Monday 02 June 2025 14:17:02 +0000 (0:00:01.715) 0:04:13.128 
*********** 2025-06-02 14:23:38.536077 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536083 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536089 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536095 | orchestrator | 2025-06-02 14:23:38.536102 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-02 14:23:38.536108 | orchestrator | Monday 02 June 2025 14:17:02 +0000 (0:00:00.320) 0:04:13.448 *********** 2025-06-02 14:23:38.536114 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536120 | orchestrator | 2025-06-02 14:23:38.536126 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-02 14:23:38.536132 | orchestrator | Monday 02 June 2025 14:17:02 +0000 (0:00:00.528) 0:04:13.976 *********** 2025-06-02 14:23:38.536138 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536144 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536150 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536156 | orchestrator | 2025-06-02 14:23:38.536162 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-02 14:23:38.536169 | orchestrator | Monday 02 June 2025 14:17:03 +0000 (0:00:00.596) 0:04:14.573 *********** 2025-06-02 14:23:38.536175 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536181 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536187 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536193 | orchestrator | 2025-06-02 14:23:38.536199 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-02 14:23:38.536205 | orchestrator | Monday 02 June 2025 14:17:03 +0000 (0:00:00.339) 0:04:14.913 *********** 2025-06-02 14:23:38.536211 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536217 | orchestrator | 2025-06-02 14:23:38.536223 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-02 14:23:38.536230 | orchestrator | Monday 02 June 2025 14:17:04 +0000 (0:00:00.550) 0:04:15.463 *********** 2025-06-02 14:23:38.536240 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.536246 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.536253 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.536259 | orchestrator | 2025-06-02 14:23:38.536265 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-02 14:23:38.536271 | orchestrator | Monday 02 June 2025 14:17:06 +0000 (0:00:01.947) 0:04:17.411 *********** 2025-06-02 14:23:38.536277 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.536283 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.536289 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.536295 | orchestrator | 2025-06-02 14:23:38.536301 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-02 14:23:38.536307 | orchestrator | Monday 02 June 2025 14:17:07 +0000 (0:00:01.234) 0:04:18.645 *********** 2025-06-02 14:23:38.536313 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.536319 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.536325 | orchestrator 
| changed: [testbed-node-2] 2025-06-02 14:23:38.536331 | orchestrator | 2025-06-02 14:23:38.536338 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-02 14:23:38.536347 | orchestrator | Monday 02 June 2025 14:17:09 +0000 (0:00:01.900) 0:04:20.546 *********** 2025-06-02 14:23:38.536353 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.536359 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.536366 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.536372 | orchestrator | 2025-06-02 14:23:38.536378 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-02 14:23:38.536384 | orchestrator | Monday 02 June 2025 14:17:11 +0000 (0:00:01.887) 0:04:22.433 *********** 2025-06-02 14:23:38.536390 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536396 | orchestrator | 2025-06-02 14:23:38.536403 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-02 14:23:38.536409 | orchestrator | Monday 02 June 2025 14:17:12 +0000 (0:00:00.829) 0:04:23.263 *********** 2025-06-02 14:23:38.536415 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.536421 | orchestrator | 2025-06-02 14:23:38.536427 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-02 14:23:38.536433 | orchestrator | Monday 02 June 2025 14:17:13 +0000 (0:00:01.158) 0:04:24.421 *********** 2025-06-02 14:23:38.536439 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.536445 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.536452 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.536458 | orchestrator | 2025-06-02 14:23:38.536464 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-02 14:23:38.536470 | orchestrator | Monday 02 June 2025 14:17:23 +0000 (0:00:09.754) 0:04:34.176 *********** 2025-06-02 14:23:38.536476 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536482 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536488 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536494 | orchestrator | 2025-06-02 14:23:38.536500 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-02 14:23:38.536506 | orchestrator | Monday 02 June 2025 14:17:23 +0000 (0:00:00.348) 0:04:34.525 *********** 2025-06-02 14:23:38.536533 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-02 14:23:38.536544 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 14:23:38.536555 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 14:23:38.536563 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 14:23:38.536570 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 14:23:38.536576 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__7c2fa653aaafc3915148b323ec08a9cbfa03de7d'}])  2025-06-02 14:23:38.536584 | orchestrator | 2025-06-02 14:23:38.536591 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 14:23:38.536597 | orchestrator | Monday 02 June 2025 14:17:38 +0000 (0:00:14.816) 0:04:49.341 *********** 2025-06-02 14:23:38.536603 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536609 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536615 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536621 | orchestrator | 2025-06-02 14:23:38.536627 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 14:23:38.536633 | orchestrator | Monday 02 June 2025 14:17:38 +0000 (0:00:00.388) 0:04:49.730 *********** 2025-06-02 14:23:38.536659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536665 | orchestrator | 2025-06-02 14:23:38.536672 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 14:23:38.536678 | orchestrator | Monday 02 June 2025 14:17:39 +0000 (0:00:00.798) 0:04:50.528 *********** 2025-06-02 14:23:38.536684 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.536690 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.536696 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.536702 | orchestrator | 2025-06-02 14:23:38.536709 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 14:23:38.536715 | orchestrator | Monday 02 June 2025 14:17:39 +0000 (0:00:00.341) 0:04:50.870 *********** 2025-06-02 14:23:38.536721 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536727 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536733 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536740 | orchestrator | 
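The fourteen-second "Set cluster configs" task above pushes each key/value item into the monitors' central configuration database (the modern replacement for per-host ceph.conf entries); the osd_crush_chooseleaf_type item is skipped because its value resolved to an Ansible omit placeholder. A sketch of the equivalent CLI for the values applied in this run, assuming it runs with the client.admin keyring on one of the mons:

    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    ceph config set global ms_bind_ipv6 false            # log shows 'False'; ceph parses booleans case-insensitively
    ceph config set global ms_bind_ipv4 true

Storing these centrally means daemons deployed later, such as the OSDs in the following play, pick the network settings up from the mons rather than from a templated config file.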
2025-06-02 14:23:38.536746 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 14:23:38.536752 | orchestrator | Monday 02 June 2025 14:17:40 +0000 (0:00:00.428) 0:04:51.298 *********** 2025-06-02 14:23:38.536758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 14:23:38.536764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 14:23:38.536771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 14:23:38.536782 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536788 | orchestrator | 2025-06-02 14:23:38.536794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 14:23:38.536800 | orchestrator | Monday 02 June 2025 14:17:41 +0000 (0:00:00.842) 0:04:52.141 *********** 2025-06-02 14:23:38.536806 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.536813 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.536819 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.536825 | orchestrator | 2025-06-02 14:23:38.536831 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 14:23:38.536837 | orchestrator | 2025-06-02 14:23:38.536843 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 14:23:38.536850 | orchestrator | Monday 02 June 2025 14:17:41 +0000 (0:00:00.761) 0:04:52.902 *********** 2025-06-02 14:23:38.536874 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536881 | orchestrator | 2025-06-02 14:23:38.536888 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 14:23:38.536894 | orchestrator | Monday 02 June 2025 14:17:42 +0000 (0:00:00.450) 0:04:53.353 *********** 2025-06-02 14:23:38.536900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.536906 | orchestrator | 2025-06-02 14:23:38.536912 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 14:23:38.536919 | orchestrator | Monday 02 June 2025 14:17:42 +0000 (0:00:00.654) 0:04:54.007 *********** 2025-06-02 14:23:38.536925 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.536931 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.536937 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.536943 | orchestrator | 2025-06-02 14:23:38.536949 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 14:23:38.536955 | orchestrator | Monday 02 June 2025 14:17:43 +0000 (0:00:00.673) 0:04:54.681 *********** 2025-06-02 14:23:38.536962 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.536968 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.536974 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.536980 | orchestrator | 2025-06-02 14:23:38.536986 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 14:23:38.536992 | orchestrator | Monday 02 June 2025 14:17:43 +0000 (0:00:00.272) 0:04:54.953 *********** 2025-06-02 14:23:38.536998 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537005 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 14:23:38.537011 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537017 | orchestrator | 2025-06-02 14:23:38.537023 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 14:23:38.537029 | orchestrator | Monday 02 June 2025 14:17:44 +0000 (0:00:00.476) 0:04:55.430 *********** 2025-06-02 14:23:38.537035 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537041 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537047 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537053 | orchestrator | 2025-06-02 14:23:38.537059 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 14:23:38.537066 | orchestrator | Monday 02 June 2025 14:17:44 +0000 (0:00:00.321) 0:04:55.752 *********** 2025-06-02 14:23:38.537072 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537078 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537084 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537090 | orchestrator | 2025-06-02 14:23:38.537097 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 14:23:38.537103 | orchestrator | Monday 02 June 2025 14:17:45 +0000 (0:00:00.706) 0:04:56.458 *********** 2025-06-02 14:23:38.537109 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537115 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537125 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537131 | orchestrator | 2025-06-02 14:23:38.537138 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 14:23:38.537144 | orchestrator | Monday 02 June 2025 14:17:45 +0000 (0:00:00.311) 0:04:56.769 *********** 2025-06-02 14:23:38.537150 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537156 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537162 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537168 | orchestrator | 2025-06-02 14:23:38.537175 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 14:23:38.537181 | orchestrator | Monday 02 June 2025 14:17:46 +0000 (0:00:00.576) 0:04:57.346 *********** 2025-06-02 14:23:38.537187 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537193 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537203 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537209 | orchestrator | 2025-06-02 14:23:38.537215 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 14:23:38.537222 | orchestrator | Monday 02 June 2025 14:17:47 +0000 (0:00:00.863) 0:04:58.209 *********** 2025-06-02 14:23:38.537228 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537234 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537240 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537246 | orchestrator | 2025-06-02 14:23:38.537252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 14:23:38.537259 | orchestrator | Monday 02 June 2025 14:17:48 +0000 (0:00:01.009) 0:04:59.219 *********** 2025-06-02 14:23:38.537265 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537271 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537277 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537283 
| orchestrator | 2025-06-02 14:23:38.537289 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 14:23:38.537295 | orchestrator | Monday 02 June 2025 14:17:48 +0000 (0:00:00.429) 0:04:59.648 *********** 2025-06-02 14:23:38.537302 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537308 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537314 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537320 | orchestrator | 2025-06-02 14:23:38.537326 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 14:23:38.537332 | orchestrator | Monday 02 June 2025 14:17:49 +0000 (0:00:00.744) 0:05:00.393 *********** 2025-06-02 14:23:38.537339 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537345 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537351 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537357 | orchestrator | 2025-06-02 14:23:38.537363 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 14:23:38.537369 | orchestrator | Monday 02 June 2025 14:17:49 +0000 (0:00:00.418) 0:05:00.812 *********** 2025-06-02 14:23:38.537375 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537381 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537388 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537394 | orchestrator | 2025-06-02 14:23:38.537400 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 14:23:38.537406 | orchestrator | Monday 02 June 2025 14:17:50 +0000 (0:00:00.521) 0:05:01.333 *********** 2025-06-02 14:23:38.537429 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537436 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537448 | orchestrator | 2025-06-02 14:23:38.537454 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 14:23:38.537460 | orchestrator | Monday 02 June 2025 14:17:50 +0000 (0:00:00.368) 0:05:01.702 *********** 2025-06-02 14:23:38.537467 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537473 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537485 | orchestrator | 2025-06-02 14:23:38.537496 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 14:23:38.537502 | orchestrator | Monday 02 June 2025 14:17:51 +0000 (0:00:00.503) 0:05:02.206 *********** 2025-06-02 14:23:38.537508 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537515 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537521 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537527 | orchestrator | 2025-06-02 14:23:38.537533 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 14:23:38.537539 | orchestrator | Monday 02 June 2025 14:17:51 +0000 (0:00:00.305) 0:05:02.512 *********** 2025-06-02 14:23:38.537546 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537552 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537558 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537564 | orchestrator | 2025-06-02 14:23:38.537570 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2025-06-02 14:23:38.537576 | orchestrator | Monday 02 June 2025 14:17:51 +0000 (0:00:00.336) 0:05:02.848 *********** 2025-06-02 14:23:38.537582 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537589 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537595 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537601 | orchestrator | 2025-06-02 14:23:38.537607 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 14:23:38.537613 | orchestrator | Monday 02 June 2025 14:17:52 +0000 (0:00:00.339) 0:05:03.188 *********** 2025-06-02 14:23:38.537619 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537625 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537631 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537637 | orchestrator | 2025-06-02 14:23:38.537674 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-02 14:23:38.537681 | orchestrator | Monday 02 June 2025 14:17:52 +0000 (0:00:00.646) 0:05:03.834 *********** 2025-06-02 14:23:38.537687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 14:23:38.537693 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:23:38.537700 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:23:38.537706 | orchestrator | 2025-06-02 14:23:38.537712 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-02 14:23:38.537718 | orchestrator | Monday 02 June 2025 14:17:53 +0000 (0:00:00.483) 0:05:04.318 *********** 2025-06-02 14:23:38.537725 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.537731 | orchestrator | 2025-06-02 14:23:38.537737 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-02 14:23:38.537743 | orchestrator | Monday 02 June 2025 14:17:53 +0000 (0:00:00.464) 0:05:04.783 *********** 2025-06-02 14:23:38.537750 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.537756 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.537762 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.537768 | orchestrator | 2025-06-02 14:23:38.537774 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-02 14:23:38.537785 | orchestrator | Monday 02 June 2025 14:17:54 +0000 (0:00:00.884) 0:05:05.667 *********** 2025-06-02 14:23:38.537791 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.537796 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.537801 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.537807 | orchestrator | 2025-06-02 14:23:38.537812 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-02 14:23:38.537818 | orchestrator | Monday 02 June 2025 14:17:54 +0000 (0:00:00.321) 0:05:05.989 *********** 2025-06-02 14:23:38.537823 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.537829 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.537834 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.537844 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-02 
14:23:38.537849 | orchestrator | 2025-06-02 14:23:38.537855 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-02 14:23:38.537860 | orchestrator | Monday 02 June 2025 14:18:04 +0000 (0:00:09.962) 0:05:15.952 *********** 2025-06-02 14:23:38.537866 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.537871 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.537877 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.537882 | orchestrator | 2025-06-02 14:23:38.537888 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-02 14:23:38.537893 | orchestrator | Monday 02 June 2025 14:18:05 +0000 (0:00:00.556) 0:05:16.509 *********** 2025-06-02 14:23:38.537899 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 14:23:38.537904 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 14:23:38.537910 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 14:23:38.537915 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.537920 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 14:23:38.537926 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 14:23:38.537931 | orchestrator | 2025-06-02 14:23:38.537937 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-02 14:23:38.537942 | orchestrator | Monday 02 June 2025 14:18:08 +0000 (0:00:02.645) 0:05:19.154 *********** 2025-06-02 14:23:38.537948 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 14:23:38.537953 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 14:23:38.537976 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 14:23:38.537982 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 14:23:38.537988 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-02 14:23:38.537993 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-02 14:23:38.537998 | orchestrator | 2025-06-02 14:23:38.538004 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-02 14:23:38.538009 | orchestrator | Monday 02 June 2025 14:18:09 +0000 (0:00:01.545) 0:05:20.700 *********** 2025-06-02 14:23:38.538037 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.538043 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.538048 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.538054 | orchestrator | 2025-06-02 14:23:38.538059 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-02 14:23:38.538065 | orchestrator | Monday 02 June 2025 14:18:10 +0000 (0:00:00.751) 0:05:21.451 *********** 2025-06-02 14:23:38.538070 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538076 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.538081 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.538086 | orchestrator | 2025-06-02 14:23:38.538092 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-02 14:23:38.538097 | orchestrator | Monday 02 June 2025 14:18:10 +0000 (0:00:00.330) 0:05:21.782 *********** 2025-06-02 14:23:38.538103 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538108 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
14:23:38.538114 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.538119 | orchestrator | 2025-06-02 14:23:38.538124 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-02 14:23:38.538130 | orchestrator | Monday 02 June 2025 14:18:10 +0000 (0:00:00.285) 0:05:22.067 *********** 2025-06-02 14:23:38.538135 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.538141 | orchestrator | 2025-06-02 14:23:38.538146 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-02 14:23:38.538152 | orchestrator | Monday 02 June 2025 14:18:11 +0000 (0:00:00.807) 0:05:22.874 *********** 2025-06-02 14:23:38.538157 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538167 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.538172 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.538178 | orchestrator | 2025-06-02 14:23:38.538183 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-02 14:23:38.538188 | orchestrator | Monday 02 June 2025 14:18:12 +0000 (0:00:00.331) 0:05:23.206 *********** 2025-06-02 14:23:38.538194 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538199 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.538205 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.538210 | orchestrator | 2025-06-02 14:23:38.538216 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-02 14:23:38.538221 | orchestrator | Monday 02 June 2025 14:18:12 +0000 (0:00:00.326) 0:05:23.533 *********** 2025-06-02 14:23:38.538227 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.538232 | orchestrator | 2025-06-02 14:23:38.538238 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-02 14:23:38.538243 | orchestrator | Monday 02 June 2025 14:18:13 +0000 (0:00:00.764) 0:05:24.297 *********** 2025-06-02 14:23:38.538248 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538254 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538259 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538265 | orchestrator | 2025-06-02 14:23:38.538270 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-02 14:23:38.538279 | orchestrator | Monday 02 June 2025 14:18:14 +0000 (0:00:01.251) 0:05:25.549 *********** 2025-06-02 14:23:38.538285 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538290 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538296 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538301 | orchestrator | 2025-06-02 14:23:38.538307 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-02 14:23:38.538312 | orchestrator | Monday 02 June 2025 14:18:15 +0000 (0:00:01.157) 0:05:26.706 *********** 2025-06-02 14:23:38.538317 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538323 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538328 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538334 | orchestrator | 2025-06-02 14:23:38.538339 | orchestrator | TASK [ceph-mgr : Systemd start mgr] 
******************************************** 2025-06-02 14:23:38.538345 | orchestrator | Monday 02 June 2025 14:18:18 +0000 (0:00:02.550) 0:05:29.256 *********** 2025-06-02 14:23:38.538350 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538356 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538361 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538366 | orchestrator | 2025-06-02 14:23:38.538372 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-02 14:23:38.538377 | orchestrator | Monday 02 June 2025 14:18:20 +0000 (0:00:01.901) 0:05:31.158 *********** 2025-06-02 14:23:38.538383 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538388 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.538394 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-02 14:23:38.538399 | orchestrator | 2025-06-02 14:23:38.538405 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-02 14:23:38.538410 | orchestrator | Monday 02 June 2025 14:18:20 +0000 (0:00:00.427) 0:05:31.586 *********** 2025-06-02 14:23:38.538416 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-02 14:23:38.538421 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-02 14:23:38.538427 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-02 14:23:38.538449 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-02 14:23:38.538540 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-06-02 14:23:38.538546 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-02 14:23:38.538551 | orchestrator | 2025-06-02 14:23:38.538557 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-02 14:23:38.538563 | orchestrator | Monday 02 June 2025 14:18:50 +0000 (0:00:29.976) 0:06:01.562 *********** 2025-06-02 14:23:38.538568 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-02 14:23:38.538574 | orchestrator | 2025-06-02 14:23:38.538579 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-02 14:23:38.538585 | orchestrator | Monday 02 June 2025 14:18:52 +0000 (0:00:01.581) 0:06:03.144 *********** 2025-06-02 14:23:38.538590 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.538595 | orchestrator | 2025-06-02 14:23:38.538601 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-02 14:23:38.538606 | orchestrator | Monday 02 June 2025 14:18:52 +0000 (0:00:00.894) 0:06:04.038 *********** 2025-06-02 14:23:38.538611 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.538617 | orchestrator | 2025-06-02 14:23:38.538622 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-02 14:23:38.538627 | orchestrator | Monday 02 June 2025 14:18:53 +0000 (0:00:00.156) 0:06:04.195 *********** 2025-06-02 14:23:38.538633 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-02 14:23:38.538638 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-02 14:23:38.538654 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-02 14:23:38.538659 | orchestrator | 2025-06-02 14:23:38.538665 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-02 14:23:38.538670 | orchestrator | Monday 02 June 2025 14:18:59 +0000 (0:00:06.379) 0:06:10.574 *********** 2025-06-02 14:23:38.538676 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-02 14:23:38.538681 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-02 14:23:38.538687 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-02 14:23:38.538692 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-02 14:23:38.538697 | orchestrator | 2025-06-02 14:23:38.538703 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 14:23:38.538708 | orchestrator | Monday 02 June 2025 14:19:04 +0000 (0:00:04.610) 0:06:15.184 *********** 2025-06-02 14:23:38.538714 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538719 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538724 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538730 | orchestrator | 2025-06-02 14:23:38.538735 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 14:23:38.538741 | orchestrator | Monday 02 June 2025 14:19:05 +0000 (0:00:00.943) 0:06:16.128 *********** 2025-06-02 14:23:38.538746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:23:38.538751 | orchestrator | 2025-06-02 
14:23:38.538757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 14:23:38.538762 | orchestrator | Monday 02 June 2025 14:19:05 +0000 (0:00:00.569) 0:06:16.697 *********** 2025-06-02 14:23:38.538768 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.538779 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.538784 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.538790 | orchestrator | 2025-06-02 14:23:38.538795 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 14:23:38.538801 | orchestrator | Monday 02 June 2025 14:19:05 +0000 (0:00:00.362) 0:06:17.060 *********** 2025-06-02 14:23:38.538806 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.538812 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.538822 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.538827 | orchestrator | 2025-06-02 14:23:38.538832 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 14:23:38.538838 | orchestrator | Monday 02 June 2025 14:19:07 +0000 (0:00:01.782) 0:06:18.842 *********** 2025-06-02 14:23:38.538843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 14:23:38.538849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 14:23:38.538854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 14:23:38.538860 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.538865 | orchestrator | 2025-06-02 14:23:38.538871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 14:23:38.538876 | orchestrator | Monday 02 June 2025 14:19:08 +0000 (0:00:00.694) 0:06:19.537 *********** 2025-06-02 14:23:38.538881 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.538887 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.538892 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.538898 | orchestrator | 2025-06-02 14:23:38.538903 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-02 14:23:38.538909 | orchestrator | 2025-06-02 14:23:38.538914 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 14:23:38.538919 | orchestrator | Monday 02 June 2025 14:19:09 +0000 (0:00:00.588) 0:06:20.125 *********** 2025-06-02 14:23:38.538925 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.538931 | orchestrator | 2025-06-02 14:23:38.538936 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 14:23:38.538941 | orchestrator | Monday 02 June 2025 14:19:09 +0000 (0:00:00.736) 0:06:20.861 *********** 2025-06-02 14:23:38.538965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.538972 | orchestrator | 2025-06-02 14:23:38.538978 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 14:23:38.538983 | orchestrator | Monday 02 June 2025 14:19:10 +0000 (0:00:00.554) 0:06:21.416 *********** 2025-06-02 14:23:38.538989 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.538994 | orchestrator | skipping: [testbed-node-4] 
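The mgr handler block that just completed follows a common ceph-ansible shape: set a "called" fact, copy a restart script into a tempdir, conditionally run it, then flip the fact back so later handler passes do not restart twice. A minimal sketch of that flag pattern, with an assumed script path (the real one lives in the tempdir created by "Make tempdir for scripts"):

- hosts: mgrs
  become: true
  tasks:
    - name: Set _mgr_handler_called before restart
      ansible.builtin.set_fact:
        _mgr_handler_called: true

    - name: Restart ceph mgr daemon(s)           # runs only while the flag is set
      ansible.builtin.command: /tmp/restart_mgr_daemon.sh   # assumed tempdir script path
      when: _mgr_handler_called | default(false) | bool
      changed_when: true

    - name: Set _mgr_handler_called after restart  # prevents duplicate restarts later in the play
      ansible.builtin.set_fact:
        _mgr_handler_called: false

In the run above the restart itself was skipped on every item (its condition evaluated false), so only the bookkeeping facts ran.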
2025-06-02 14:23:38.538999 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539005 | orchestrator | 2025-06-02 14:23:38.539010 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 14:23:38.539016 | orchestrator | Monday 02 June 2025 14:19:10 +0000 (0:00:00.319) 0:06:21.735 *********** 2025-06-02 14:23:38.539021 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539026 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539032 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539037 | orchestrator | 2025-06-02 14:23:38.539042 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 14:23:38.539048 | orchestrator | Monday 02 June 2025 14:19:11 +0000 (0:00:00.988) 0:06:22.724 *********** 2025-06-02 14:23:38.539053 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539058 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539064 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539069 | orchestrator | 2025-06-02 14:23:38.539075 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 14:23:38.539080 | orchestrator | Monday 02 June 2025 14:19:12 +0000 (0:00:00.688) 0:06:23.413 *********** 2025-06-02 14:23:38.539086 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539091 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539096 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539101 | orchestrator | 2025-06-02 14:23:38.539107 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 14:23:38.539112 | orchestrator | Monday 02 June 2025 14:19:13 +0000 (0:00:00.672) 0:06:24.085 *********** 2025-06-02 14:23:38.539122 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539128 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539133 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539138 | orchestrator | 2025-06-02 14:23:38.539144 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 14:23:38.539149 | orchestrator | Monday 02 June 2025 14:19:13 +0000 (0:00:00.314) 0:06:24.399 *********** 2025-06-02 14:23:38.539155 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539160 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539165 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539171 | orchestrator | 2025-06-02 14:23:38.539176 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 14:23:38.539182 | orchestrator | Monday 02 June 2025 14:19:13 +0000 (0:00:00.596) 0:06:24.995 *********** 2025-06-02 14:23:38.539187 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539192 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539198 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539203 | orchestrator | 2025-06-02 14:23:38.539209 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 14:23:38.539214 | orchestrator | Monday 02 June 2025 14:19:14 +0000 (0:00:00.369) 0:06:25.365 *********** 2025-06-02 14:23:38.539219 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539225 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539230 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539235 | orchestrator | 2025-06-02 
14:23:38.539241 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 14:23:38.539246 | orchestrator | Monday 02 June 2025 14:19:14 +0000 (0:00:00.681) 0:06:26.046 *********** 2025-06-02 14:23:38.539252 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539257 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539262 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539268 | orchestrator | 2025-06-02 14:23:38.539276 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 14:23:38.539282 | orchestrator | Monday 02 June 2025 14:19:15 +0000 (0:00:00.677) 0:06:26.724 *********** 2025-06-02 14:23:38.539287 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539293 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539298 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539304 | orchestrator | 2025-06-02 14:23:38.539309 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 14:23:38.539314 | orchestrator | Monday 02 June 2025 14:19:16 +0000 (0:00:00.559) 0:06:27.284 *********** 2025-06-02 14:23:38.539320 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539325 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539331 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539336 | orchestrator | 2025-06-02 14:23:38.539341 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 14:23:38.539346 | orchestrator | Monday 02 June 2025 14:19:16 +0000 (0:00:00.326) 0:06:27.610 *********** 2025-06-02 14:23:38.539352 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539357 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539363 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539368 | orchestrator | 2025-06-02 14:23:38.539373 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 14:23:38.539379 | orchestrator | Monday 02 June 2025 14:19:16 +0000 (0:00:00.405) 0:06:28.015 *********** 2025-06-02 14:23:38.539384 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539390 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539395 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539400 | orchestrator | 2025-06-02 14:23:38.539406 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 14:23:38.539411 | orchestrator | Monday 02 June 2025 14:19:17 +0000 (0:00:00.352) 0:06:28.368 *********** 2025-06-02 14:23:38.539416 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539422 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539431 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539436 | orchestrator | 2025-06-02 14:23:38.539442 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 14:23:38.539447 | orchestrator | Monday 02 June 2025 14:19:17 +0000 (0:00:00.595) 0:06:28.963 *********** 2025-06-02 14:23:38.539453 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539458 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539464 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539469 | orchestrator | 2025-06-02 14:23:38.539477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 
14:23:38.539482 | orchestrator | Monday 02 June 2025 14:19:18 +0000 (0:00:00.309) 0:06:29.273 *********** 2025-06-02 14:23:38.539488 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539493 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539499 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539504 | orchestrator | 2025-06-02 14:23:38.539510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 14:23:38.539515 | orchestrator | Monday 02 June 2025 14:19:18 +0000 (0:00:00.336) 0:06:29.609 *********** 2025-06-02 14:23:38.539520 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539526 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539531 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539537 | orchestrator | 2025-06-02 14:23:38.539542 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 14:23:38.539548 | orchestrator | Monday 02 June 2025 14:19:18 +0000 (0:00:00.307) 0:06:29.917 *********** 2025-06-02 14:23:38.539553 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539558 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539564 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539569 | orchestrator | 2025-06-02 14:23:38.539575 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 14:23:38.539580 | orchestrator | Monday 02 June 2025 14:19:19 +0000 (0:00:00.621) 0:06:30.538 *********** 2025-06-02 14:23:38.539586 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539591 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539596 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539602 | orchestrator | 2025-06-02 14:23:38.539607 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-02 14:23:38.539613 | orchestrator | Monday 02 June 2025 14:19:19 +0000 (0:00:00.535) 0:06:31.074 *********** 2025-06-02 14:23:38.539618 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539624 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539629 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539634 | orchestrator | 2025-06-02 14:23:38.539653 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-02 14:23:38.539659 | orchestrator | Monday 02 June 2025 14:19:20 +0000 (0:00:00.317) 0:06:31.392 *********** 2025-06-02 14:23:38.539664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 14:23:38.539670 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:23:38.539675 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:23:38.539680 | orchestrator | 2025-06-02 14:23:38.539686 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-02 14:23:38.539691 | orchestrator | Monday 02 June 2025 14:19:21 +0000 (0:00:00.936) 0:06:32.328 *********** 2025-06-02 14:23:38.539696 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.539702 | orchestrator | 2025-06-02 14:23:38.539707 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-02 14:23:38.539712 | 
orchestrator | Monday 02 June 2025 14:19:22 +0000 (0:00:00.761) 0:06:33.089 *********** 2025-06-02 14:23:38.539718 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539723 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539733 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539738 | orchestrator | 2025-06-02 14:23:38.539744 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-02 14:23:38.539749 | orchestrator | Monday 02 June 2025 14:19:22 +0000 (0:00:00.323) 0:06:33.413 *********** 2025-06-02 14:23:38.539755 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539764 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539769 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539775 | orchestrator | 2025-06-02 14:23:38.539780 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-02 14:23:38.539785 | orchestrator | Monday 02 June 2025 14:19:22 +0000 (0:00:00.310) 0:06:33.723 *********** 2025-06-02 14:23:38.539791 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539796 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539801 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539807 | orchestrator | 2025-06-02 14:23:38.539812 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-02 14:23:38.539818 | orchestrator | Monday 02 June 2025 14:19:23 +0000 (0:00:00.936) 0:06:34.659 *********** 2025-06-02 14:23:38.539823 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.539828 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.539834 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.539839 | orchestrator | 2025-06-02 14:23:38.539844 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-02 14:23:38.539850 | orchestrator | Monday 02 June 2025 14:19:23 +0000 (0:00:00.345) 0:06:35.005 *********** 2025-06-02 14:23:38.539855 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 14:23:38.539861 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 14:23:38.539866 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-02 14:23:38.539872 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 14:23:38.539877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-02 14:23:38.539882 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 14:23:38.539888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 14:23:38.539893 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 14:23:38.539902 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 14:23:38.539908 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 14:23:38.539913 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 14:23:38.539919 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.file-max', 'value': 26234859}) 2025-06-02 14:23:38.539924 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-02 14:23:38.539929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-02 14:23:38.539935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-02 14:23:38.539940 | orchestrator | 2025-06-02 14:23:38.539946 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-02 14:23:38.539951 | orchestrator | Monday 02 June 2025 14:19:26 +0000 (0:00:03.017) 0:06:38.023 *********** 2025-06-02 14:23:38.539957 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.539962 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.539967 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.539973 | orchestrator | 2025-06-02 14:23:38.539978 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-02 14:23:38.539989 | orchestrator | Monday 02 June 2025 14:19:27 +0000 (0:00:00.301) 0:06:38.324 *********** 2025-06-02 14:23:38.539994 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.540000 | orchestrator | 2025-06-02 14:23:38.540005 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-02 14:23:38.540010 | orchestrator | Monday 02 June 2025 14:19:28 +0000 (0:00:00.832) 0:06:39.156 *********** 2025-06-02 14:23:38.540016 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 14:23:38.540021 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 14:23:38.540027 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-02 14:23:38.540032 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-02 14:23:38.540038 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-02 14:23:38.540043 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-02 14:23:38.540048 | orchestrator | 2025-06-02 14:23:38.540054 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-02 14:23:38.540059 | orchestrator | Monday 02 June 2025 14:19:29 +0000 (0:00:00.960) 0:06:40.116 *********** 2025-06-02 14:23:38.540065 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 14:23:38.540070 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 14:23:38.540076 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 14:23:38.540081 | orchestrator | 2025-06-02 14:23:38.540086 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-02 14:23:38.540092 | orchestrator | Monday 02 June 2025 14:19:30 +0000 (0:00:01.914) 0:06:42.031 *********** 2025-06-02 14:23:38.540097 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 14:23:38.540103 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 14:23:38.540108 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.540114 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 14:23:38.540119 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 14:23:38.540124 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 14:23:38.540133 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 14:23:38.540138 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 14:23:38.540143 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.540149 | orchestrator | 2025-06-02 14:23:38.540154 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-02 14:23:38.540160 | orchestrator | Monday 02 June 2025 14:19:32 +0000 (0:00:01.450) 0:06:43.481 *********** 2025-06-02 14:23:38.540165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 14:23:38.540171 | orchestrator | 2025-06-02 14:23:38.540176 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-02 14:23:38.540181 | orchestrator | Monday 02 June 2025 14:19:34 +0000 (0:00:01.881) 0:06:45.362 *********** 2025-06-02 14:23:38.540187 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.540192 | orchestrator | 2025-06-02 14:23:38.540197 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-02 14:23:38.540203 | orchestrator | Monday 02 June 2025 14:19:34 +0000 (0:00:00.562) 0:06:45.925 *********** 2025-06-02 14:23:38.540208 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d', 'data_vg': 'ceph-1475bed6-7ba6-5e8e-8ce2-217cc0c6359d'}) 2025-06-02 14:23:38.540214 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10', 'data_vg': 'ceph-a3b854b8-87a4-5f9e-b4c6-d99e1c5dbb10'}) 2025-06-02 14:23:38.540220 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-999978ba-f5e8-5970-b49f-3220d15259a2', 'data_vg': 'ceph-999978ba-f5e8-5970-b49f-3220d15259a2'}) 2025-06-02 14:23:38.540230 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c542c38e-2fd0-548c-8c9f-0ca498087064', 'data_vg': 'ceph-c542c38e-2fd0-548c-8c9f-0ca498087064'}) 2025-06-02 14:23:38.540238 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4eaa56f6-1bb5-52f9-9765-bc2816f621f7', 'data_vg': 'ceph-4eaa56f6-1bb5-52f9-9765-bc2816f621f7'}) 2025-06-02 14:23:38.540244 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bbf0c471-2dcf-5556-af63-e058f1325c4d', 'data_vg': 'ceph-bbf0c471-2dcf-5556-af63-e058f1325c4d'}) 2025-06-02 14:23:38.540249 | orchestrator | 2025-06-02 14:23:38.540255 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-02 14:23:38.540260 | orchestrator | Monday 02 June 2025 14:20:16 +0000 (0:00:41.876) 0:07:27.801 *********** 2025-06-02 14:23:38.540266 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540271 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540277 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540282 | orchestrator | 2025-06-02 14:23:38.540287 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-02 14:23:38.540293 | orchestrator | Monday 02 June 2025 14:20:17 +0000 (0:00:00.648) 0:07:28.450 *********** 2025-06-02 14:23:38.540298 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.540304 | orchestrator | 2025-06-02 
14:23:38.540309 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-02 14:23:38.540314 | orchestrator | Monday 02 June 2025 14:20:17 +0000 (0:00:00.573) 0:07:29.024 *********** 2025-06-02 14:23:38.540320 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.540325 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.540331 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.540336 | orchestrator | 2025-06-02 14:23:38.540341 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-02 14:23:38.540347 | orchestrator | Monday 02 June 2025 14:20:18 +0000 (0:00:00.741) 0:07:29.765 *********** 2025-06-02 14:23:38.540352 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.540358 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.540363 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.540369 | orchestrator | 2025-06-02 14:23:38.540374 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-02 14:23:38.540380 | orchestrator | Monday 02 June 2025 14:20:21 +0000 (0:00:02.669) 0:07:32.434 *********** 2025-06-02 14:23:38.540385 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.540391 | orchestrator | 2025-06-02 14:23:38.540396 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-02 14:23:38.540401 | orchestrator | Monday 02 June 2025 14:20:21 +0000 (0:00:00.510) 0:07:32.945 *********** 2025-06-02 14:23:38.540407 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.540412 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.540418 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.540423 | orchestrator | 2025-06-02 14:23:38.540428 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-02 14:23:38.540434 | orchestrator | Monday 02 June 2025 14:20:23 +0000 (0:00:01.264) 0:07:34.209 *********** 2025-06-02 14:23:38.540439 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.540445 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.540450 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.540455 | orchestrator | 2025-06-02 14:23:38.540461 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-02 14:23:38.540466 | orchestrator | Monday 02 June 2025 14:20:24 +0000 (0:00:01.415) 0:07:35.625 *********** 2025-06-02 14:23:38.540472 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.540477 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.540482 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.540488 | orchestrator | 2025-06-02 14:23:38.540493 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-02 14:23:38.540506 | orchestrator | Monday 02 June 2025 14:20:26 +0000 (0:00:01.665) 0:07:37.290 *********** 2025-06-02 14:23:38.540511 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540517 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540522 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540528 | orchestrator | 2025-06-02 14:23:38.540533 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-02 14:23:38.540538 | orchestrator | Monday 02 June 
2025 14:20:26 +0000 (0:00:00.357) 0:07:37.648 *********** 2025-06-02 14:23:38.540544 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540549 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540555 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540560 | orchestrator | 2025-06-02 14:23:38.540566 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-02 14:23:38.540571 | orchestrator | Monday 02 June 2025 14:20:26 +0000 (0:00:00.304) 0:07:37.953 *********** 2025-06-02 14:23:38.540576 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 14:23:38.540582 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-06-02 14:23:38.540587 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-02 14:23:38.540593 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-02 14:23:38.540598 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-02 14:23:38.540603 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-06-02 14:23:38.540609 | orchestrator | 2025-06-02 14:23:38.540614 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-02 14:23:38.540619 | orchestrator | Monday 02 June 2025 14:20:28 +0000 (0:00:01.289) 0:07:39.243 *********** 2025-06-02 14:23:38.540625 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 14:23:38.540630 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-02 14:23:38.540636 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 14:23:38.540652 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 14:23:38.540657 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 14:23:38.540662 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-02 14:23:38.540668 | orchestrator | 2025-06-02 14:23:38.540673 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-02 14:23:38.540679 | orchestrator | Monday 02 June 2025 14:20:30 +0000 (0:00:02.205) 0:07:41.449 *********** 2025-06-02 14:23:38.540684 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 14:23:38.540689 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-02 14:23:38.540697 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 14:23:38.540703 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 14:23:38.540708 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 14:23:38.540714 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-02 14:23:38.540719 | orchestrator | 2025-06-02 14:23:38.540724 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-02 14:23:38.540730 | orchestrator | Monday 02 June 2025 14:20:33 +0000 (0:00:03.548) 0:07:44.997 *********** 2025-06-02 14:23:38.540735 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540741 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 14:23:38.540751 | orchestrator | 2025-06-02 14:23:38.540757 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-02 14:23:38.540762 | orchestrator | Monday 02 June 2025 14:20:36 +0000 (0:00:02.166) 0:07:47.163 *********** 2025-06-02 14:23:38.540768 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540773 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540778 | 
orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-02 14:23:38.540784 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 14:23:38.540789 | orchestrator | 2025-06-02 14:23:38.540799 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-02 14:23:38.540804 | orchestrator | Monday 02 June 2025 14:20:49 +0000 (0:00:13.022) 0:08:00.186 *********** 2025-06-02 14:23:38.540810 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540815 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540821 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540826 | orchestrator | 2025-06-02 14:23:38.540831 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 14:23:38.540837 | orchestrator | Monday 02 June 2025 14:20:49 +0000 (0:00:00.808) 0:08:00.995 *********** 2025-06-02 14:23:38.540842 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540848 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540853 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540858 | orchestrator | 2025-06-02 14:23:38.540864 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-02 14:23:38.540869 | orchestrator | Monday 02 June 2025 14:20:50 +0000 (0:00:00.636) 0:08:01.631 *********** 2025-06-02 14:23:38.540875 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.540880 | orchestrator | 2025-06-02 14:23:38.540885 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 14:23:38.540891 | orchestrator | Monday 02 June 2025 14:20:51 +0000 (0:00:00.558) 0:08:02.189 *********** 2025-06-02 14:23:38.540896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.540902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.540907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.540913 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540918 | orchestrator | 2025-06-02 14:23:38.540923 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 14:23:38.540929 | orchestrator | Monday 02 June 2025 14:20:51 +0000 (0:00:00.392) 0:08:02.581 *********** 2025-06-02 14:23:38.540934 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540940 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.540945 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.540950 | orchestrator | 2025-06-02 14:23:38.540956 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 14:23:38.540965 | orchestrator | Monday 02 June 2025 14:20:51 +0000 (0:00:00.290) 0:08:02.872 *********** 2025-06-02 14:23:38.540970 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540976 | orchestrator | 2025-06-02 14:23:38.540981 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 14:23:38.540986 | orchestrator | Monday 02 June 2025 14:20:52 +0000 (0:00:00.258) 0:08:03.131 *********** 2025-06-02 14:23:38.540992 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.540997 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541003 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541008 | orchestrator | 2025-06-02 14:23:38.541014 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 14:23:38.541019 | orchestrator | Monday 02 June 2025 14:20:52 +0000 (0:00:00.573) 0:08:03.704 *********** 2025-06-02 14:23:38.541024 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541030 | orchestrator | 2025-06-02 14:23:38.541035 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 14:23:38.541041 | orchestrator | Monday 02 June 2025 14:20:52 +0000 (0:00:00.241) 0:08:03.946 *********** 2025-06-02 14:23:38.541046 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541051 | orchestrator | 2025-06-02 14:23:38.541057 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 14:23:38.541062 | orchestrator | Monday 02 June 2025 14:20:53 +0000 (0:00:00.224) 0:08:04.170 *********** 2025-06-02 14:23:38.541068 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541073 | orchestrator | 2025-06-02 14:23:38.541083 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 14:23:38.541089 | orchestrator | Monday 02 June 2025 14:20:53 +0000 (0:00:00.125) 0:08:04.295 *********** 2025-06-02 14:23:38.541094 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541099 | orchestrator | 2025-06-02 14:23:38.541105 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 14:23:38.541110 | orchestrator | Monday 02 June 2025 14:20:53 +0000 (0:00:00.223) 0:08:04.519 *********** 2025-06-02 14:23:38.541116 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541121 | orchestrator | 2025-06-02 14:23:38.541126 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 14:23:38.541132 | orchestrator | Monday 02 June 2025 14:20:53 +0000 (0:00:00.220) 0:08:04.740 *********** 2025-06-02 14:23:38.541140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:23:38.541145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:23:38.541151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:23:38.541156 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541162 | orchestrator | 2025-06-02 14:23:38.541167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 14:23:38.541172 | orchestrator | Monday 02 June 2025 14:20:54 +0000 (0:00:00.358) 0:08:05.099 *********** 2025-06-02 14:23:38.541178 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541183 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541189 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541194 | orchestrator | 2025-06-02 14:23:38.541199 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 14:23:38.541205 | orchestrator | Monday 02 June 2025 14:20:54 +0000 (0:00:00.320) 0:08:05.420 *********** 2025-06-02 14:23:38.541210 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541216 | orchestrator | 2025-06-02 14:23:38.541221 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] 
**************************** 2025-06-02 14:23:38.541227 | orchestrator | Monday 02 June 2025 14:20:55 +0000 (0:00:00.814) 0:08:06.234 *********** 2025-06-02 14:23:38.541232 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541237 | orchestrator | 2025-06-02 14:23:38.541243 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-02 14:23:38.541248 | orchestrator | 2025-06-02 14:23:38.541253 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 14:23:38.541259 | orchestrator | Monday 02 June 2025 14:20:55 +0000 (0:00:00.704) 0:08:06.938 *********** 2025-06-02 14:23:38.541264 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.541270 | orchestrator | 2025-06-02 14:23:38.541276 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 14:23:38.541281 | orchestrator | Monday 02 June 2025 14:20:57 +0000 (0:00:01.199) 0:08:08.137 *********** 2025-06-02 14:23:38.541287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.541292 | orchestrator | 2025-06-02 14:23:38.541298 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 14:23:38.541303 | orchestrator | Monday 02 June 2025 14:20:58 +0000 (0:00:01.304) 0:08:09.442 *********** 2025-06-02 14:23:38.541308 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541314 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.541319 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541325 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.541330 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.541335 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541341 | orchestrator | 2025-06-02 14:23:38.541346 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 14:23:38.541357 | orchestrator | Monday 02 June 2025 14:20:59 +0000 (0:00:01.108) 0:08:10.551 *********** 2025-06-02 14:23:38.541363 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541368 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541374 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541379 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541385 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541390 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541396 | orchestrator | 2025-06-02 14:23:38.541401 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 14:23:38.541409 | orchestrator | Monday 02 June 2025 14:21:00 +0000 (0:00:01.100) 0:08:11.652 *********** 2025-06-02 14:23:38.541415 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541420 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541426 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541431 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541437 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541442 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541447 | orchestrator | 2025-06-02 14:23:38.541453 | orchestrator 
| TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 14:23:38.541458 | orchestrator | Monday 02 June 2025 14:21:01 +0000 (0:00:01.351) 0:08:13.003 *********** 2025-06-02 14:23:38.541464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541469 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541474 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541480 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541485 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541490 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541496 | orchestrator | 2025-06-02 14:23:38.541501 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 14:23:38.541507 | orchestrator | Monday 02 June 2025 14:21:02 +0000 (0:00:01.071) 0:08:14.075 *********** 2025-06-02 14:23:38.541512 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541517 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.541523 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541528 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.541534 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.541539 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541544 | orchestrator | 2025-06-02 14:23:38.541550 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 14:23:38.541555 | orchestrator | Monday 02 June 2025 14:21:04 +0000 (0:00:01.028) 0:08:15.104 *********** 2025-06-02 14:23:38.541561 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541571 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541577 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541582 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541588 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541593 | orchestrator | 2025-06-02 14:23:38.541599 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 14:23:38.541604 | orchestrator | Monday 02 June 2025 14:21:04 +0000 (0:00:00.627) 0:08:15.732 *********** 2025-06-02 14:23:38.541612 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541618 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541623 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541628 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541634 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541667 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541674 | orchestrator | 2025-06-02 14:23:38.541679 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 14:23:38.541685 | orchestrator | Monday 02 June 2025 14:21:05 +0000 (0:00:00.880) 0:08:16.612 *********** 2025-06-02 14:23:38.541690 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.541695 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.541705 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.541711 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541716 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541721 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541727 | orchestrator | 2025-06-02 14:23:38.541732 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] 
********************** 2025-06-02 14:23:38.541738 | orchestrator | Monday 02 June 2025 14:21:06 +0000 (0:00:01.197) 0:08:17.809 *********** 2025-06-02 14:23:38.541743 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.541748 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.541754 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.541759 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541765 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541770 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541775 | orchestrator | 2025-06-02 14:23:38.541781 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 14:23:38.541786 | orchestrator | Monday 02 June 2025 14:21:08 +0000 (0:00:01.382) 0:08:19.191 *********** 2025-06-02 14:23:38.541792 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541797 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541802 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541808 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541813 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541818 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541824 | orchestrator | 2025-06-02 14:23:38.541829 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 14:23:38.541835 | orchestrator | Monday 02 June 2025 14:21:08 +0000 (0:00:00.639) 0:08:19.831 *********** 2025-06-02 14:23:38.541840 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.541846 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.541851 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.541856 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.541862 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.541867 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.541873 | orchestrator | 2025-06-02 14:23:38.541878 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 14:23:38.541883 | orchestrator | Monday 02 June 2025 14:21:09 +0000 (0:00:00.883) 0:08:20.714 *********** 2025-06-02 14:23:38.541889 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541894 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541900 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541905 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541910 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541916 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541921 | orchestrator | 2025-06-02 14:23:38.541927 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 14:23:38.541932 | orchestrator | Monday 02 June 2025 14:21:10 +0000 (0:00:00.638) 0:08:21.353 *********** 2025-06-02 14:23:38.541937 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.541948 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.541954 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.541959 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.541964 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.541970 | orchestrator | 2025-06-02 14:23:38.541975 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 14:23:38.541983 | orchestrator | Monday 02 June 
2025 14:21:11 +0000 (0:00:00.870) 0:08:22.224 *********** 2025-06-02 14:23:38.541989 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.541994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.542000 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.542005 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.542011 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.542030 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.542036 | orchestrator | 2025-06-02 14:23:38.542041 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 14:23:38.542051 | orchestrator | Monday 02 June 2025 14:21:11 +0000 (0:00:00.632) 0:08:22.856 *********** 2025-06-02 14:23:38.542056 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.542062 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.542067 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.542072 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.542078 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.542083 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.542088 | orchestrator | 2025-06-02 14:23:38.542094 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 14:23:38.542099 | orchestrator | Monday 02 June 2025 14:21:12 +0000 (0:00:00.838) 0:08:23.694 *********** 2025-06-02 14:23:38.542105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:23:38.542110 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:23:38.542116 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:23:38.542121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.542126 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.542131 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.542137 | orchestrator | 2025-06-02 14:23:38.542142 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 14:23:38.542148 | orchestrator | Monday 02 June 2025 14:21:13 +0000 (0:00:00.588) 0:08:24.283 *********** 2025-06-02 14:23:38.542153 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542158 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.542164 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.542169 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:23:38.542174 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:23:38.542180 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:23:38.542185 | orchestrator | 2025-06-02 14:23:38.542191 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 14:23:38.542199 | orchestrator | Monday 02 June 2025 14:21:14 +0000 (0:00:00.856) 0:08:25.139 *********** 2025-06-02 14:23:38.542204 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542209 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.542214 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.542218 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.542223 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.542228 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.542233 | orchestrator | 2025-06-02 14:23:38.542237 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 14:23:38.542242 | orchestrator | Monday 02 June 2025 14:21:14 +0000 (0:00:00.637) 0:08:25.777 *********** 
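The long run of "Check for a ... container" and "Set_fact handler_*_status" tasks in this play pairs a cheap container lookup with a boolean fact that the restart handlers consume later. A minimal sketch of one such pair; the podman invocation and container naming are assumptions, since the log does not show the underlying commands:

- hosts: all
  become: true
  tasks:
    - name: Check for a ceph-crash container     # analogous to the checks above
      ansible.builtin.command: >-
        podman ps -q --filter name=ceph-crash-{{ ansible_facts['hostname'] }}
      register: crash_container_stat
      changed_when: false
      failed_when: false

    - name: Set_fact handler_crash_status        # true only if the container is running
      ansible.builtin.set_fact:
        handler_crash_status: "{{ crash_container_stat.stdout | length > 0 }}"

Each check only runs on hosts that should carry that daemon, which is why the mon and mgr checks skip on the OSD nodes (testbed-node-3/4/5) and the osd, mds, and rgw checks skip on the control nodes (testbed-node-0/1/2).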
2025-06-02 14:23:38.542247 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542252 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.542256 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.542261 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.542266 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.542271 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.542275 | orchestrator | 2025-06-02 14:23:38.542280 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-02 14:23:38.542285 | orchestrator | Monday 02 June 2025 14:21:16 +0000 (0:00:01.331) 0:08:27.109 *********** 2025-06-02 14:23:38.542290 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.542295 | orchestrator | 2025-06-02 14:23:38.542300 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-02 14:23:38.542304 | orchestrator | Monday 02 June 2025 14:21:19 +0000 (0:00:03.847) 0:08:30.957 *********** 2025-06-02 14:23:38.542309 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542314 | orchestrator | 2025-06-02 14:23:38.542319 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-02 14:23:38.542324 | orchestrator | Monday 02 June 2025 14:21:21 +0000 (0:00:01.934) 0:08:32.891 *********** 2025-06-02 14:23:38.542328 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542333 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.542342 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.542346 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.542351 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.542356 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.542361 | orchestrator | 2025-06-02 14:23:38.542366 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-02 14:23:38.542370 | orchestrator | Monday 02 June 2025 14:21:23 +0000 (0:00:01.803) 0:08:34.694 *********** 2025-06-02 14:23:38.542375 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.542380 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.542384 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.542389 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.542394 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.542399 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.542403 | orchestrator | 2025-06-02 14:23:38.542408 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-02 14:23:38.542413 | orchestrator | Monday 02 June 2025 14:21:24 +0000 (0:00:01.013) 0:08:35.708 *********** 2025-06-02 14:23:38.542418 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.542423 | orchestrator | 2025-06-02 14:23:38.542428 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-02 14:23:38.542433 | orchestrator | Monday 02 June 2025 14:21:26 +0000 (0:00:01.493) 0:08:37.201 *********** 2025-06-02 14:23:38.542438 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.542442 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.542447 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.542452 | orchestrator | changed: [testbed-node-3] 
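"Create client.crash keyring" above ran once against the first mon, and the resulting key was then copied out to all six nodes. The sketch below shows what such a keyring bootstrap typically looks like with the ceph CLI; the capability profile ('allow profile crash' on mon and mgr) is the upstream convention for client.crash, but the exact invocation ceph-ansible uses is not visible in this log:

- hosts: mons[0]
  become: true
  tasks:
    - name: Create client.crash keyring
      ansible.builtin.command: >-
        ceph auth get-or-create client.crash
        mon 'allow profile crash'
        mgr 'allow profile crash'
        -o /etc/ceph/ceph.client.crash.keyring
      args:
        creates: /etc/ceph/ceph.client.crash.keyring   # keeps the task idempotent on reruns

With the keyring distributed, the per-node ceph-crash containers started just after this can post crash reports into /var/lib/ceph/crash/posted, the directory created above.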
2025-06-02 14:23:38.542457 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.542461 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.542466 | orchestrator | 2025-06-02 14:23:38.542474 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-02 14:23:38.542479 | orchestrator | Monday 02 June 2025 14:21:28 +0000 (0:00:02.363) 0:08:39.565 *********** 2025-06-02 14:23:38.542484 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.542488 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.542493 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.542498 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.542503 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.542507 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.542512 | orchestrator | 2025-06-02 14:23:38.542517 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-02 14:23:38.542522 | orchestrator | Monday 02 June 2025 14:21:31 +0000 (0:00:03.024) 0:08:42.589 *********** 2025-06-02 14:23:38.542527 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:23:38.542532 | orchestrator | 2025-06-02 14:23:38.542537 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-02 14:23:38.542541 | orchestrator | Monday 02 June 2025 14:21:32 +0000 (0:00:01.123) 0:08:43.712 *********** 2025-06-02 14:23:38.542546 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542551 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.542556 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.542561 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.542565 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.542570 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.542575 | orchestrator | 2025-06-02 14:23:38.542580 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-02 14:23:38.542585 | orchestrator | Monday 02 June 2025 14:21:33 +0000 (0:00:00.696) 0:08:44.408 *********** 2025-06-02 14:23:38.542589 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:23:38.542594 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:23:38.542603 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:23:38.542608 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:23:38.542612 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:23:38.542617 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:23:38.542622 | orchestrator | 2025-06-02 14:23:38.542627 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-02 14:23:38.542632 | orchestrator | Monday 02 June 2025 14:21:35 +0000 (0:00:02.022) 0:08:46.431 *********** 2025-06-02 14:23:38.542636 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:23:38.542655 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:23:38.542660 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:23:38.542665 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:23:38.542670 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:23:38.542675 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:23:38.542679 | orchestrator | 2025-06-02 14:23:38.542684 | orchestrator | PLAY [Apply role ceph-mds] 
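[Editor's note] The ceph-crash block above boils down to a few ceph/systemd operations. A minimal shell sketch, assuming the upstream client.crash capability profile; the exact unit name is an assumption (plain ceph-crash.service on package installs, an instance unit in ceph-ansible's containerized setup):

    # Keyring with the upstream crash-module capability profile (run on a mon)
    ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash' \
        -o /etc/ceph/ceph.client.crash.keyring
    # Directory the crash agent uses to mark reports as already posted
    install -d -m 0750 -o ceph -g ceph /var/lib/ceph/crash/posted
    # Hand the daemon over to systemd (unit name is an assumption)
    systemctl daemon-reload
    systemctl enable --now ceph-crash.service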
2025-06-02 14:23:38.542684 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-02 14:23:38.542689 | orchestrator |
2025-06-02 14:23:38.542694 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 14:23:38.542698 | orchestrator | Monday 02 June 2025 14:21:36 +0000 (0:00:01.137) 0:08:47.569 ***********
2025-06-02 14:23:38.542703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.542708 | orchestrator |
2025-06-02 14:23:38.542713 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 14:23:38.542718 | orchestrator | Monday 02 June 2025 14:21:36 +0000 (0:00:00.480) 0:08:48.049 ***********
2025-06-02 14:23:38.542722 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.542727 | orchestrator |
2025-06-02 14:23:38.542732 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 14:23:38.542737 | orchestrator | Monday 02 June 2025 14:21:37 +0000 (0:00:00.618) 0:08:48.668 ***********
2025-06-02 14:23:38.542741 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.542746 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.542751 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.542756 | orchestrator |
2025-06-02 14:23:38.542761 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 14:23:38.542765 | orchestrator | Monday 02 June 2025 14:21:37 +0000 (0:00:00.275) 0:08:48.943 ***********
2025-06-02 14:23:38.542770 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.542775 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.542779 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.542784 | orchestrator |
2025-06-02 14:23:38.542789 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 14:23:38.542794 | orchestrator | Monday 02 June 2025 14:21:38 +0000 (0:00:00.681) 0:08:49.624 ***********
2025-06-02 14:23:38.542799 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.542803 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.542808 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.542813 | orchestrator |
2025-06-02 14:23:38.542818 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 14:23:38.542822 | orchestrator | Monday 02 June 2025 14:21:39 +0000 (0:00:00.952) 0:08:50.577 ***********
2025-06-02 14:23:38.542827 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.542832 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.542837 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.542841 | orchestrator |
2025-06-02 14:23:38.542846 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 14:23:38.542851 | orchestrator | Monday 02 June 2025 14:21:40 +0000 (0:00:00.791) 0:08:51.369 ***********
2025-06-02 14:23:38.542856 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.542861 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.542865 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.542870 | orchestrator |
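[Editor's note] The "Check for a ... container" tasks only record whether a daemon container already exists on each host, so the handlers later in the play know whether a restart is required. A minimal sketch of that probe, assuming docker and the ceph-ansible container naming scheme ceph-<daemon>-<hostname> (neither is printed directly in this log):

    # Non-empty output means an osd container is running on this host;
    # the result is stored as a fact (e.g. handler_osd_status) and consulted
    # by the "Restart ceph ... daemon(s)" handlers.
    docker ps -q --filter "name=ceph-osd-$(hostname)"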
2025-06-02 14:23:38.542879 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 14:23:38.542883 | orchestrator | Monday 02 June 2025 14:21:40 +0000 (0:00:00.343) 0:08:51.712 ***********
2025-06-02 14:23:38.542888 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.542893 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.542898 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.542902 | orchestrator |
2025-06-02 14:23:38.542910 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 14:23:38.542915 | orchestrator | Monday 02 June 2025 14:21:40 +0000 (0:00:00.262) 0:08:51.974 ***********
2025-06-02 14:23:38.542920 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.542925 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.542929 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.542934 | orchestrator |
2025-06-02 14:23:38.542939 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 14:23:38.542944 | orchestrator | Monday 02 June 2025 14:21:41 +0000 (0:00:00.453) 0:08:52.428 ***********
2025-06-02 14:23:38.542949 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.542953 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.542958 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.542963 | orchestrator |
2025-06-02 14:23:38.542968 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 14:23:38.542972 | orchestrator | Monday 02 June 2025 14:21:42 +0000 (0:00:00.717) 0:08:53.146 ***********
2025-06-02 14:23:38.542977 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.542982 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.542986 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.542991 | orchestrator |
2025-06-02 14:23:38.542996 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 14:23:38.543001 | orchestrator | Monday 02 June 2025 14:21:42 +0000 (0:00:00.715) 0:08:53.861 ***********
2025-06-02 14:23:38.543006 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543010 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543015 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543020 | orchestrator |
2025-06-02 14:23:38.543025 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 14:23:38.543029 | orchestrator | Monday 02 June 2025 14:21:43 +0000 (0:00:00.298) 0:08:54.160 ***********
2025-06-02 14:23:38.543034 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543039 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543043 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543048 | orchestrator |
2025-06-02 14:23:38.543053 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 14:23:38.543058 | orchestrator | Monday 02 June 2025 14:21:43 +0000 (0:00:00.567) 0:08:54.728 ***********
2025-06-02 14:23:38.543062 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543067 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543072 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543077 | orchestrator |
2025-06-02 14:23:38.543085 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 14:23:38.543090 | orchestrator | Monday 02 June 2025 14:21:44 +0000 (0:00:00.353) 0:08:55.081 ***********
2025-06-02 14:23:38.543094 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543099 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543104 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543109 | orchestrator |
2025-06-02 14:23:38.543114 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 14:23:38.543118 | orchestrator | Monday 02 June 2025 14:21:44 +0000 (0:00:00.345) 0:08:55.427 ***********
2025-06-02 14:23:38.543123 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543128 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543133 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543137 | orchestrator |
2025-06-02 14:23:38.543142 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 14:23:38.543147 | orchestrator | Monday 02 June 2025 14:21:44 +0000 (0:00:00.358) 0:08:55.786 ***********
2025-06-02 14:23:38.543156 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543160 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543165 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543170 | orchestrator |
2025-06-02 14:23:38.543175 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 14:23:38.543180 | orchestrator | Monday 02 June 2025 14:21:45 +0000 (0:00:00.639) 0:08:56.426 ***********
2025-06-02 14:23:38.543184 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543189 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543194 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543199 | orchestrator |
2025-06-02 14:23:38.543203 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 14:23:38.543208 | orchestrator | Monday 02 June 2025 14:21:45 +0000 (0:00:00.344) 0:08:56.770 ***********
2025-06-02 14:23:38.543213 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543218 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543222 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543227 | orchestrator |
2025-06-02 14:23:38.543232 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 14:23:38.543237 | orchestrator | Monday 02 June 2025 14:21:46 +0000 (0:00:00.362) 0:08:57.133 ***********
2025-06-02 14:23:38.543242 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543246 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543251 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543256 | orchestrator |
2025-06-02 14:23:38.543261 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 14:23:38.543266 | orchestrator | Monday 02 June 2025 14:21:46 +0000 (0:00:00.408) 0:08:57.541 ***********
2025-06-02 14:23:38.543270 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543275 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543280 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543284 | orchestrator |
2025-06-02 14:23:38.543289 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-02 14:23:38.543294 | orchestrator | Monday 02 June 2025 14:21:47 +0000 (0:00:00.878) 0:08:58.419 ***********
2025-06-02 14:23:38.543299 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543304 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543309 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-02 14:23:38.543313 | orchestrator |
2025-06-02 14:23:38.543318 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-02 14:23:38.543323 | orchestrator | Monday 02 June 2025 14:21:47 +0000 (0:00:00.401) 0:08:58.821 ***********
2025-06-02 14:23:38.543328 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 14:23:38.543332 | orchestrator |
2025-06-02 14:23:38.543337 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-02 14:23:38.543342 | orchestrator | Monday 02 June 2025 14:21:49 +0000 (0:00:02.100) 0:09:00.921 ***********
2025-06-02 14:23:38.543348 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-02 14:23:38.543355 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543359 | orchestrator |
2025-06-02 14:23:38.543364 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-02 14:23:38.543369 | orchestrator | Monday 02 June 2025 14:21:50 +0000 (0:00:00.216) 0:09:01.138 ***********
2025-06-02 14:23:38.543376 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 14:23:38.543387 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 14:23:38.543396 | orchestrator |
2025-06-02 14:23:38.543401 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-02 14:23:38.543405 | orchestrator | Monday 02 June 2025 14:21:57 +0000 (0:00:07.267) 0:09:08.406 ***********
2025-06-02 14:23:38.543410 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 14:23:38.543415 | orchestrator |
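[Editor's note] The create_mds_filesystems.yml flow above maps onto a handful of mon-side commands, delegated here to testbed-node-0. A sketch using the parameters visible in the task items (pg_num/pgp_num 16, size 3, replicated_rule); the filesystem name "cephfs" is an assumption, not taken from this log:

    # Inspect the default crush rule (the fact task above parses this output)
    ceph osd crush rule dump
    # Data and metadata pools: 16 PGs each, 3 replicas,
    # default crush rule (replicated_rule)
    ceph osd pool create cephfs_data 16 16
    ceph osd pool create cephfs_metadata 16 16
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_metadata size 3
    # Tie the pools together as a filesystem; this also tags them
    # with the cephfs application
    ceph fs new cephfs cephfs_metadata cephfs_data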
2025-06-02 14:23:38.543420 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-02 14:23:38.543424 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:03.327) 0:09:11.733 ***********
2025-06-02 14:23:38.543429 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.543434 | orchestrator |
2025-06-02 14:23:38.543441 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-02 14:23:38.543446 | orchestrator | Monday 02 June 2025 14:22:01 +0000 (0:00:00.510) 0:09:12.243 ***********
2025-06-02 14:23:38.543451 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 14:23:38.543456 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 14:23:38.543461 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 14:23:38.543465 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-02 14:23:38.543470 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-02 14:23:38.543475 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-02 14:23:38.543480 | orchestrator |
2025-06-02 14:23:38.543484 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-02 14:23:38.543489 | orchestrator | Monday 02 June 2025 14:22:02 +0000 (0:00:00.983) 0:09:13.227 ***********
2025-06-02 14:23:38.543494 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.543499 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.543504 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:23:38.543508 | orchestrator |
2025-06-02 14:23:38.543513 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-02 14:23:38.543518 | orchestrator | Monday 02 June 2025 14:22:04 +0000 (0:00:02.438) 0:09:15.666 ***********
2025-06-02 14:23:38.543523 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.543528 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.543532 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543537 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 14:23:38.543542 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 14:23:38.543547 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543552 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 14:23:38.543571 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 14:23:38.543576 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543581 | orchestrator |
2025-06-02 14:23:38.543586 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-02 14:23:38.543591 | orchestrator | Monday 02 June 2025 14:22:06 +0000 (0:00:01.577) 0:09:17.244 ***********
2025-06-02 14:23:38.543596 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543600 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543605 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543610 | orchestrator |
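[Editor's note] "Create mds keyring" provisions per-node MDS credentials under /var/lib/ceph/mds/ceph-<hostname>, the directory created just above. A sketch for one node, using the capability set from the upstream manual-deployment documentation (an assumption; the exact caps ceph-ansible requests are not printed here):

    ceph auth get-or-create mds.testbed-node-3 \
        mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' \
        -o /var/lib/ceph/mds/ceph-testbed-node-3/keyring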
2025-06-02 14:23:38.543615 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-02 14:23:38.543620 | orchestrator | Monday 02 June 2025 14:22:08 +0000 (0:00:02.766) 0:09:20.010 ***********
2025-06-02 14:23:38.543628 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.543633 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.543637 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.543666 | orchestrator |
2025-06-02 14:23:38.543671 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-02 14:23:38.543676 | orchestrator | Monday 02 June 2025 14:22:09 +0000 (0:00:00.385) 0:09:20.395 ***********
2025-06-02 14:23:38.543681 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.543686 | orchestrator |
2025-06-02 14:23:38.543691 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-02 14:23:38.543699 | orchestrator | Monday 02 June 2025 14:22:10 +0000 (0:00:00.906) 0:09:21.302 ***********
2025-06-02 14:23:38.543704 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.543709 | orchestrator |
2025-06-02 14:23:38.543713 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-02 14:23:38.543718 | orchestrator | Monday 02 June 2025 14:22:10 +0000 (0:00:00.543) 0:09:21.845 ***********
2025-06-02 14:23:38.543723 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543728 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543733 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543737 | orchestrator |
2025-06-02 14:23:38.543742 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-02 14:23:38.543747 | orchestrator | Monday 02 June 2025 14:22:11 +0000 (0:00:01.226) 0:09:23.071 ***********
2025-06-02 14:23:38.543752 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543756 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543761 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543766 | orchestrator |
2025-06-02 14:23:38.543771 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-02 14:23:38.543776 | orchestrator | Monday 02 June 2025 14:22:13 +0000 (0:00:01.608) 0:09:24.681 ***********
2025-06-02 14:23:38.543781 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543787 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543794 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543802 | orchestrator |
2025-06-02 14:23:38.543811 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-02 14:23:38.543817 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:01.838) 0:09:26.520 ***********
2025-06-02 14:23:38.543825 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543833 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543841 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543848 | orchestrator |
2025-06-02 14:23:38.543857 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-02 14:23:38.543862 | orchestrator | Monday 02 June 2025 14:22:17 +0000 (0:00:01.999) 0:09:28.520 ***********
2025-06-02 14:23:38.543867 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543872 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543876 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543881 | orchestrator |
2025-06-02 14:23:38.543890 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 14:23:38.543895 | orchestrator | Monday 02 June 2025 14:22:18 +0000 (0:00:01.531) 0:09:30.051 ***********
2025-06-02 14:23:38.543900 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543904 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543909 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543914 | orchestrator |
2025-06-02 14:23:38.543919 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-02 14:23:38.543923 | orchestrator | Monday 02 June 2025 14:22:19 +0000 (0:00:00.735) 0:09:30.787 ***********
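[Editor's note] The systemd.yml tasks above render a unit that runs the MDS as a container, enable the shared ceph-mds.target, start the instance, and block until the daemon's admin socket appears. A sketch of the equivalent manual steps; the instance unit name ceph-mds@<hostname> follows the ceph-ansible convention, and the socket path is an assumption:

    systemctl daemon-reload
    systemctl enable --now ceph-mds.target
    systemctl enable --now ceph-mds@testbed-node-3.service
    # "Wait for mds socket to exist" is essentially this poll
    until ls /var/run/ceph/ceph-mds.testbed-node-3.asok >/dev/null 2>&1; do
        sleep 1
    done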
2025-06-02 14:23:38.543928 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.543933 | orchestrator |
2025-06-02 14:23:38.543944 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-02 14:23:38.543948 | orchestrator | Monday 02 June 2025 14:22:20 +0000 (0:00:00.660) 0:09:31.448 ***********
2025-06-02 14:23:38.543953 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.543958 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.543963 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.543968 | orchestrator |
2025-06-02 14:23:38.543972 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-02 14:23:38.543977 | orchestrator | Monday 02 June 2025 14:22:20 +0000 (0:00:00.341) 0:09:31.790 ***********
2025-06-02 14:23:38.543982 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.543987 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.543992 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.543996 | orchestrator |
2025-06-02 14:23:38.544001 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-02 14:23:38.544006 | orchestrator | Monday 02 June 2025 14:22:22 +0000 (0:00:01.298) 0:09:33.088 ***********
2025-06-02 14:23:38.544011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 14:23:38.544015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 14:23:38.544020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 14:23:38.544025 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544030 | orchestrator |
2025-06-02 14:23:38.544035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-02 14:23:38.544040 | orchestrator | Monday 02 June 2025 14:22:22 +0000 (0:00:00.767) 0:09:33.856 ***********
2025-06-02 14:23:38.544044 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544049 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544054 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544059 | orchestrator |
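[Editor's note] The handler pattern above (copy a restart script, flag before/after, run the restart only when something actually changed) exists so MDS daemons restart one at a time rather than all at once; here the restart was skipped because nothing required it. A minimal sketch of the serial-restart idea only; the loop and health check are assumptions, not the literal ceph-ansible script:

    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        ssh "$node" systemctl restart "ceph-mds@${node}.service"
        # wait for the restarted daemon to rejoin before touching the next one
        until ceph fs status | grep -q active; do
            sleep 2
        done
    done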
2025-06-02 14:23:38.544064 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-02 14:23:38.544068 | orchestrator |
2025-06-02 14:23:38.544073 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 14:23:38.544078 | orchestrator | Monday 02 June 2025 14:22:23 +0000 (0:00:00.869) 0:09:34.725 ***********
2025-06-02 14:23:38.544083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.544088 | orchestrator |
2025-06-02 14:23:38.544093 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 14:23:38.544097 | orchestrator | Monday 02 June 2025 14:22:24 +0000 (0:00:00.562) 0:09:35.287 ***********
2025-06-02 14:23:38.544102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.544106 | orchestrator |
2025-06-02 14:23:38.544111 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 14:23:38.544115 | orchestrator | Monday 02 June 2025 14:22:24 +0000 (0:00:00.661) 0:09:35.949 ***********
2025-06-02 14:23:38.544123 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544128 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544133 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544137 | orchestrator |
2025-06-02 14:23:38.544142 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 14:23:38.544146 | orchestrator | Monday 02 June 2025 14:22:25 +0000 (0:00:00.318) 0:09:36.268 ***********
2025-06-02 14:23:38.544151 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544155 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544160 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544164 | orchestrator |
2025-06-02 14:23:38.544169 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 14:23:38.544173 | orchestrator | Monday 02 June 2025 14:22:25 +0000 (0:00:00.771) 0:09:37.040 ***********
2025-06-02 14:23:38.544178 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544182 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544191 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544195 | orchestrator |
2025-06-02 14:23:38.544200 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 14:23:38.544204 | orchestrator | Monday 02 June 2025 14:22:26 +0000 (0:00:00.748) 0:09:37.788 ***********
2025-06-02 14:23:38.544209 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544213 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544218 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544223 | orchestrator |
2025-06-02 14:23:38.544227 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 14:23:38.544232 | orchestrator | Monday 02 June 2025 14:22:27 +0000 (0:00:00.977) 0:09:38.766 ***********
2025-06-02 14:23:38.544236 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544241 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544245 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544250 | orchestrator |
2025-06-02 14:23:38.544254 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 14:23:38.544259 | orchestrator | Monday 02 June 2025 14:22:28 +0000 (0:00:00.334) 0:09:39.100 ***********
2025-06-02 14:23:38.544263 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544268 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544272 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544277 | orchestrator |
2025-06-02 14:23:38.544281 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 14:23:38.544288 | orchestrator | Monday 02 June 2025 14:22:28 +0000 (0:00:00.382) 0:09:39.483 ***********
2025-06-02 14:23:38.544293 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544297 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544302 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544306 | orchestrator |
2025-06-02 14:23:38.544311 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 14:23:38.544315 | orchestrator | Monday 02 June 2025 14:22:28 +0000 (0:00:00.327) 0:09:39.811 ***********
2025-06-02 14:23:38.544320 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544325 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544329 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544334 | orchestrator |
2025-06-02 14:23:38.544338 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 14:23:38.544343 | orchestrator | Monday 02 June 2025 14:22:30 +0000 (0:00:01.385) 0:09:41.197 ***********
2025-06-02 14:23:38.544347 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544352 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544356 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544361 | orchestrator |
2025-06-02 14:23:38.544365 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 14:23:38.544370 | orchestrator | Monday 02 June 2025 14:22:30 +0000 (0:00:00.791) 0:09:41.988 ***********
2025-06-02 14:23:38.544374 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544379 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544383 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544388 | orchestrator |
2025-06-02 14:23:38.544392 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 14:23:38.544397 | orchestrator | Monday 02 June 2025 14:22:31 +0000 (0:00:00.338) 0:09:42.327 ***********
2025-06-02 14:23:38.544401 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544406 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544410 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544415 | orchestrator |
2025-06-02 14:23:38.544419 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 14:23:38.544424 | orchestrator | Monday 02 June 2025 14:22:31 +0000 (0:00:00.328) 0:09:42.655 ***********
2025-06-02 14:23:38.544428 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544433 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544437 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544442 | orchestrator |
2025-06-02 14:23:38.544446 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 14:23:38.544455 | orchestrator | Monday 02 June 2025 14:22:32 +0000 (0:00:00.759) 0:09:43.414 ***********
2025-06-02 14:23:38.544459 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544464 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544468 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544473 | orchestrator |
2025-06-02 14:23:38.544477 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 14:23:38.544482 | orchestrator | Monday 02 June 2025 14:22:32 +0000 (0:00:00.367) 0:09:43.782 ***********
2025-06-02 14:23:38.544486 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544491 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544495 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544500 | orchestrator |
2025-06-02 14:23:38.544504 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 14:23:38.544509 | orchestrator | Monday 02 June 2025 14:22:33 +0000 (0:00:00.368) 0:09:44.151 ***********
2025-06-02 14:23:38.544513 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544518 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544522 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544527 | orchestrator |
2025-06-02 14:23:38.544531 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 14:23:38.544536 | orchestrator | Monday 02 June 2025 14:22:33 +0000 (0:00:00.315) 0:09:44.466 ***********
2025-06-02 14:23:38.544540 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544545 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544553 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544557 | orchestrator |
2025-06-02 14:23:38.544562 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 14:23:38.544566 | orchestrator | Monday 02 June 2025 14:22:34 +0000 (0:00:00.784) 0:09:45.251 ***********
2025-06-02 14:23:38.544571 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544575 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544580 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544584 | orchestrator |
2025-06-02 14:23:38.544589 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 14:23:38.544593 | orchestrator | Monday 02 June 2025 14:22:34 +0000 (0:00:00.406) 0:09:45.657 ***********
2025-06-02 14:23:38.544598 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544602 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544607 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544612 | orchestrator |
2025-06-02 14:23:38.544616 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 14:23:38.544621 | orchestrator | Monday 02 June 2025 14:22:34 +0000 (0:00:00.327) 0:09:45.985 ***********
2025-06-02 14:23:38.544625 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.544630 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.544634 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.544651 | orchestrator |
2025-06-02 14:23:38.544656 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-02 14:23:38.544661 | orchestrator | Monday 02 June 2025 14:22:35 +0000 (0:00:00.844) 0:09:46.829 ***********
2025-06-02 14:23:38.544665 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.544670 | orchestrator |
2025-06-02 14:23:38.544674 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-02 14:23:38.544679 | orchestrator | Monday 02 June 2025 14:22:36 +0000 (0:00:00.585) 0:09:47.415 ***********
2025-06-02 14:23:38.544683 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544688 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.544693 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:23:38.544697 | orchestrator |
2025-06-02 14:23:38.544702 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-02 14:23:38.544712 | orchestrator | Monday 02 June 2025 14:22:38 +0000 (0:00:02.107) 0:09:49.523 ***********
2025-06-02 14:23:38.544717 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.544722 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.544726 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.544731 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 14:23:38.544735 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 14:23:38.544740 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.544744 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 14:23:38.544749 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 14:23:38.544753 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.544758 | orchestrator |
2025-06-02 14:23:38.544762 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-02 14:23:38.544767 | orchestrator | Monday 02 June 2025 14:22:39 +0000 (0:00:01.461) 0:09:50.984 ***********
2025-06-02 14:23:38.544772 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.544776 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.544781 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.544785 | orchestrator |
2025-06-02 14:23:38.544790 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-02 14:23:38.544794 | orchestrator | Monday 02 June 2025 14:22:40 +0000 (0:00:00.328) 0:09:51.312 ***********
2025-06-02 14:23:38.544799 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.544803 | orchestrator |
2025-06-02 14:23:38.544808 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-02 14:23:38.544812 | orchestrator | Monday 02 June 2025 14:22:40 +0000 (0:00:00.536) 0:09:51.849 ***********
2025-06-02 14:23:38.544817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.544822 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.544827 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.544831 | orchestrator |
2025-06-02 14:23:38.544836 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-02 14:23:38.544840 | orchestrator | Monday 02 June 2025 14:22:41 +0000 (0:00:01.124) 0:09:52.973 ***********
2025-06-02 14:23:38.544845 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544849 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 14:23:38.544854 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544858 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 14:23:38.544863 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544871 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 14:23:38.544876 | orchestrator |
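[Editor's note] "Create rgw keyrings" runs against the first mon (testbed-node-0) and produces one key per gateway instance; the items above show a single instance, rgw0, per node on port 8081. A sketch for one instance, using the mon/osd capabilities ceph-ansible conventionally grants RGW keys (an assumption, since the caps are not printed here):

    ceph auth get-or-create client.rgw.testbed-node-3.rgw0 \
        mon 'allow rw' osd 'allow rwx' \
        -o /var/lib/ceph/radosgw/ceph-rgw.testbed-node-3.rgw0/keyring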
2025-06-02 14:23:38.544880 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-02 14:23:38.544885 | orchestrator | Monday 02 June 2025 14:22:46 +0000 (0:00:04.741) 0:09:57.715 ***********
2025-06-02 14:23:38.544889 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544894 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:23:38.544898 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544906 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:23:38.544911 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:23:38.544915 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:23:38.544920 | orchestrator |
2025-06-02 14:23:38.544924 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-02 14:23:38.544929 | orchestrator | Monday 02 June 2025 14:22:48 +0000 (0:00:02.289) 0:10:00.005 ***********
2025-06-02 14:23:38.544933 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 14:23:38.544938 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.544942 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 14:23:38.544947 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.544952 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 14:23:38.544956 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.544960 | orchestrator |
2025-06-02 14:23:38.544965 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-06-02 14:23:38.544970 | orchestrator | Monday 02 June 2025 14:22:50 +0000 (0:00:01.259) 0:10:01.265 ***********
2025-06-02 14:23:38.544974 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-06-02 14:23:38.544979 | orchestrator |
2025-06-02 14:23:38.544983 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-06-02 14:23:38.544988 | orchestrator | Monday 02 June 2025 14:22:50 +0000 (0:00:00.226) 0:10:01.492 ***********
2025-06-02 14:23:38.544995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.544999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545018 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545023 | orchestrator |
2025-06-02 14:23:38.545027 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-06-02 14:23:38.545032 | orchestrator | Monday 02 June 2025 14:22:51 +0000 (0:00:00.880) 0:10:02.372 ***********
2025-06-02 14:23:38.545036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545059 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545064 | orchestrator |
2025-06-02 14:23:38.545068 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-06-02 14:23:38.545073 | orchestrator | Monday 02 June 2025 14:22:52 +0000 (0:00:01.073) 0:10:03.446 ***********
2025-06-02 14:23:38.545077 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545086 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545090 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545102 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-02 14:23:38.545107 | orchestrator |
2025-06-02 14:23:38.545112 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-06-02 14:23:38.545117 | orchestrator | Monday 02 June 2025 14:23:23 +0000 (0:00:30.678) 0:10:34.124 ***********
2025-06-02 14:23:38.545121 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545126 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.545130 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.545135 | orchestrator |
2025-06-02 14:23:38.545139 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-06-02 14:23:38.545144 | orchestrator | Monday 02 June 2025 14:23:23 +0000 (0:00:00.344) 0:10:34.469 ***********
2025-06-02 14:23:38.545148 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545153 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.545158 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.545162 | orchestrator |
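[Editor's note] "Create rgw pools" is the long pole of this play (30.68s in the recap below); the "Create ec profile" and "Set crush rule" tasks were skipped because every item is a plain replicated pool with no custom rule. A sketch of the equivalent commands for the five items above (pg_num 8, size 3); the application-enable step is an assumption:

    for pool in default.rgw.buckets.data default.rgw.buckets.index \
                default.rgw.control default.rgw.log default.rgw.meta; do
        ceph osd pool create "$pool" 8 8 replicated
        ceph osd pool set "$pool" size 3
        ceph osd pool application enable "$pool" rgw
    done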
2025-06-02 14:23:38.545167 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-06-02 14:23:38.545171 | orchestrator | Monday 02 June 2025 14:23:23 +0000 (0:00:00.334) 0:10:34.804 ***********
2025-06-02 14:23:38.545176 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.545180 | orchestrator |
2025-06-02 14:23:38.545185 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-06-02 14:23:38.545189 | orchestrator | Monday 02 June 2025 14:23:24 +0000 (0:00:00.912) 0:10:35.716 ***********
2025-06-02 14:23:38.545194 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.545199 | orchestrator |
2025-06-02 14:23:38.545203 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-06-02 14:23:38.545208 | orchestrator | Monday 02 June 2025 14:23:25 +0000 (0:00:00.554) 0:10:36.271 ***********
2025-06-02 14:23:38.545212 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.545217 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.545221 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.545226 | orchestrator |
2025-06-02 14:23:38.545230 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-06-02 14:23:38.545235 | orchestrator | Monday 02 June 2025 14:23:26 +0000 (0:00:01.395) 0:10:37.666 ***********
2025-06-02 14:23:38.545242 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.545247 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.545251 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.545256 | orchestrator |
2025-06-02 14:23:38.545260 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-06-02 14:23:38.545265 | orchestrator | Monday 02 June 2025 14:23:28 +0000 (0:00:01.430) 0:10:39.097 ***********
2025-06-02 14:23:38.545269 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:23:38.545274 | orchestrator | changed: [testbed-node-4]
2025-06-02 14:23:38.545279 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:23:38.545283 | orchestrator |
2025-06-02 14:23:38.545288 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-06-02 14:23:38.545292 | orchestrator | Monday 02 June 2025 14:23:30 +0000 (0:00:02.815) 0:10:41.912 ***********
2025-06-02 14:23:38.545300 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.545305 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.545309 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 14:23:38.545314 | orchestrator |
2025-06-02 14:23:38.545318 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 14:23:38.545323 | orchestrator | Monday 02 June 2025 14:23:33 +0000 (0:00:02.762) 0:10:44.675 ***********
2025-06-02 14:23:38.545327 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545332 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.545337 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.545341 | orchestrator |
2025-06-02 14:23:38.545346 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-02 14:23:38.545350 | orchestrator | Monday 02 June 2025 14:23:33 +0000 (0:00:00.366) 0:10:45.041 ***********
2025-06-02 14:23:38.545355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:23:38.545359 | orchestrator |
2025-06-02 14:23:38.545364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-02 14:23:38.545368 | orchestrator | Monday 02 June 2025 14:23:34 +0000 (0:00:00.530) 0:10:45.571 ***********
2025-06-02 14:23:38.545373 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.545378 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.545382 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.545387 | orchestrator |
2025-06-02 14:23:38.545391 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-02 14:23:38.545396 | orchestrator | Monday 02 June 2025 14:23:35 +0000 (0:00:00.587) 0:10:46.158 ***********
2025-06-02 14:23:38.545400 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545405 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:23:38.545409 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:23:38.545414 | orchestrator |
2025-06-02 14:23:38.545419 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-02 14:23:38.545423 | orchestrator | Monday 02 June 2025 14:23:35 +0000 (0:00:00.371) 0:10:46.529 ***********
2025-06-02 14:23:38.545428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 14:23:38.545432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 14:23:38.545437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 14:23:38.545442 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:23:38.545446 | orchestrator |
2025-06-02 14:23:38.545453 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-02 14:23:38.545458 | orchestrator | Monday 02 June 2025 14:23:36 +0000 (0:00:00.617) 0:10:47.147 ***********
2025-06-02 14:23:38.545462 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:23:38.545467 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:23:38.545472 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:23:38.545476 | orchestrator |
2025-06-02 14:23:38.545481 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:23:38.545485 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-06-02 14:23:38.545490 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-06-02 14:23:38.545495 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-06-02 14:23:38.545499 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-06-02 14:23:38.545507 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-06-02 14:23:38.545512 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-06-02 14:23:38.545516 | orchestrator |
2025-06-02 14:23:38.545521 | orchestrator |
2025-06-02 14:23:38.545525 | orchestrator |
2025-06-02 14:23:38.545530 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:23:38.545535 | orchestrator | Monday 02 June 2025 14:23:36 +0000 (0:00:00.240) 0:10:47.387 ***********
2025-06-02 14:23:38.545539 | orchestrator | ===============================================================================
2025-06-02 14:23:38.545544 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 62.30s
2025-06-02 14:23:38.545550 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.88s
2025-06-02 14:23:38.545555 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.68s
2025-06-02 14:23:38.545560 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.98s
2025-06-02 14:23:38.545564 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.82s
2025-06-02 14:23:38.545569 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.02s
2025-06-02 14:23:38.545573 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.96s
2025-06-02 14:23:38.545578 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.75s
2025-06-02 14:23:38.545582 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.27s
2025-06-02 14:23:38.545587 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.58s
2025-06-02 14:23:38.545592 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.38s
2025-06-02 14:23:38.545596 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.74s
2025-06-02 14:23:38.545601 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.61s
2025-06-02 14:23:38.545605 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.19s
2025-06-02 14:23:38.545610 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.85s
2025-06-02 14:23:38.545614 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.55s
2025-06-02 14:23:38.545619 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.50s
2025-06-02 14:23:38.545624 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.33s
2025-06-02 14:23:38.545628 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.08s
2025-06-02 14:23:38.545633 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.02s
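[Editor's note] After the Ansible run completes, the job keeps polling the OSISM manager until the remaining tasks leave the STARTED state; that is what the repeated INFO entries below are. A minimal sketch of such a loop, with get_task_state standing in as a hypothetical helper for the real manager API call (not shown in this log):

    # get_task_state <uuid> is hypothetical; the real job queries the
    # OSISM manager for each task's state.
    while :; do
        all_done=1
        for task in 58ba6125-a9ef-49f6-8891-d507161be977 \
                    280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 \
                    23206dd1-8717-4842-a32a-695a5324ff3b; do
            state="$(get_task_state "$task")"
            echo "Task $task is in state $state"
            [ "$state" = "STARTED" ] && all_done=0
        done
        [ "$all_done" -eq 1 ] && break
        echo "Wait 1 second(s) until the next check"
        sleep 1
    done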
2025-06-02 14:23:38.545637 | orchestrator | 2025-06-02 14:23:38 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED
2025-06-02 14:23:38.545655 | orchestrator | 2025-06-02 14:23:38 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED
2025-06-02 14:23:38.545660 | orchestrator | 2025-06-02 14:23:38 | INFO  | Task 23206dd1-8717-4842-a32a-695a5324ff3b is in state STARTED
2025-06-02 14:23:38.545665 | orchestrator | 2025-06-02 14:23:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds elided (one every ~3 seconds, 14:23:41 through 14:24:51): tasks 58ba6125-a9ef-49f6-8891-d507161be977, 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 and 23206dd1-8717-4842-a32a-695a5324ff3b all remained in state STARTED ...]
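These STARTED/Wait lines are the deployment CLI polling the state of the three tasks it enqueued and sleeping between rounds until each one reaches a terminal state. A minimal Python sketch of such a wait loop; get_task_state stands in for whatever result-backend lookup the real client performs and is not its actual API:

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll every task each round; drop a task once it reaches a terminal
    # state, and sleep between rounds while any remain pending.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)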
2025-06-02 14:24:54.885018 | orchestrator | 2025-06-02 14:24:54 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED
2025-06-02 14:24:54.886247 | orchestrator | 2025-06-02 14:24:54 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED
2025-06-02 14:24:54.888903 | orchestrator | 2025-06-02 14:24:54 | INFO  | Task 23206dd1-8717-4842-a32a-695a5324ff3b is in state SUCCESS
2025-06-02 14:24:54.889298 | orchestrator | 2025-06-02 14:24:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:24:54.891346 | orchestrator |
2025-06-02 14:24:54.891390 | orchestrator |
2025-06-02 14:24:54.891402 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:24:54.891414 | orchestrator |
2025-06-02 14:24:54.891426 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:24:54.891437 | orchestrator | Monday 02 June 2025 14:21:56 +0000 (0:00:00.189) 0:00:00.189 ***********
2025-06-02 14:24:54.891448 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:24:54.891460 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:24:54.891471 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:24:54.891482 | orchestrator |
2025-06-02 14:24:54.891493 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:24:54.891504 | orchestrator | Monday 02 June 2025 14:21:57 +0000 (0:00:00.219) 0:00:00.409 ***********
2025-06-02 14:24:54.891516 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-06-02 14:24:54.891527 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-06-02 14:24:54.891538 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-02 14:24:54.891548 | orchestrator |
2025-06-02 14:24:54.891560 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-02 14:24:54.891571 | orchestrator |
2025-06-02 14:24:54.891581 | orchestrator | TASK [opensearch :
include_tasks] ********************************************** 2025-06-02 14:24:54.891592 | orchestrator | Monday 02 June 2025 14:21:57 +0000 (0:00:00.329) 0:00:00.738 *********** 2025-06-02 14:24:54.891603 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:24:54.891615 | orchestrator | 2025-06-02 14:24:54.891625 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-02 14:24:54.891636 | orchestrator | Monday 02 June 2025 14:21:58 +0000 (0:00:00.473) 0:00:01.211 *********** 2025-06-02 14:24:54.891647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 14:24:54.891662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 14:24:54.891682 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 14:24:54.891693 | orchestrator | 2025-06-02 14:24:54.891704 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-02 14:24:54.891715 | orchestrator | Monday 02 June 2025 14:21:58 +0000 (0:00:00.635) 0:00:01.847 *********** 2025-06-02 14:24:54.891749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.891765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.891791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.891807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.891826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.891847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.891859 | orchestrator | 2025-06-02 14:24:54.891870 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 14:24:54.891881 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:01.478) 0:00:03.326 *********** 2025-06-02 14:24:54.891892 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:24:54.891903 | orchestrator | 2025-06-02 14:24:54.891916 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 14:24:54.891928 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:00.515) 0:00:03.841 *********** 2025-06-02 14:24:54.891979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.891994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892096 | orchestrator | 2025-06-02 14:24:54.892109 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 14:24:54.892122 | orchestrator | Monday 02 June 2025 14:22:02 +0000 (0:00:02.355) 0:00:06.197 *********** 2025-06-02 14:24:54.892139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892167 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:24:54.892180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892222 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:24:54.892240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892268 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:24:54.892279 | orchestrator | 2025-06-02 14:24:54.892290 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 14:24:54.892301 | orchestrator | Monday 02 June 2025 14:22:04 +0000 (0:00:01.089) 0:00:07.286 *********** 2025-06-02 14:24:54.892312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892332 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892349 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:24:54.892365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892390 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:24:54.892402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 14:24:54.892421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 14:24:54.892441 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:24:54.892452 | orchestrator | 2025-06-02 14:24:54.892463 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-02 14:24:54.892474 | orchestrator | Monday 02 June 2025 14:22:04 +0000 (0:00:00.913) 0:00:08.199 *********** 2025-06-02 14:24:54.892488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892578 | orchestrator | 2025-06-02 14:24:54.892589 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-02 14:24:54.892600 | orchestrator | Monday 02 June 2025 14:22:07 +0000 (0:00:02.497) 0:00:10.697 *********** 2025-06-02 14:24:54.892611 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:24:54.892621 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:24:54.892632 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:24:54.892643 | orchestrator | 2025-06-02 14:24:54.892654 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-02 14:24:54.892665 | orchestrator | Monday 02 June 2025 14:22:11 +0000 (0:00:03.541) 0:00:14.238 *********** 2025-06-02 14:24:54.892675 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:24:54.892686 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:24:54.892697 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:24:54.892708 | orchestrator | 2025-06-02 14:24:54.892718 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-02 14:24:54.892729 | orchestrator | Monday 02 June 2025 14:22:12 +0000 (0:00:01.670) 0:00:15.908 *********** 2025-06-02 14:24:54.892740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 14:24:54.892795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 14:24:54.892845 | orchestrator | 2025-06-02 14:24:54.892855 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 14:24:54.892866 | orchestrator | Monday 02 June 2025 14:22:14 +0000 (0:00:02.198) 0:00:18.107 *********** 2025-06-02 14:24:54.892877 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:24:54.892888 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:24:54.892899 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:24:54.892909 | orchestrator | 2025-06-02 14:24:54.892920 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 14:24:54.892931 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:00.317) 0:00:18.424 *********** 2025-06-02 14:24:54.892960 | orchestrator | 2025-06-02 14:24:54.892971 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 14:24:54.892982 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:00.070) 0:00:18.495 *********** 2025-06-02 14:24:54.892992 | orchestrator | 2025-06-02 14:24:54.893003 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 14:24:54.893014 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:00.066) 0:00:18.561 *********** 2025-06-02 14:24:54.893024 | orchestrator | 2025-06-02 14:24:54.893039 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-02 14:24:54.893050 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:00.308) 0:00:18.870 *********** 2025-06-02 14:24:54.893061 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:24:54.893072 | orchestrator | 2025-06-02 14:24:54.893082 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-02 14:24:54.893093 | orchestrator | Monday 02 June 2025 14:22:15 +0000 (0:00:00.210) 0:00:19.081 *********** 2025-06-02 14:24:54.893104 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:24:54.893114 | orchestrator | 2025-06-02 14:24:54.893125 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-02 14:24:54.893135 | orchestrator | Monday 02 June 2025 14:22:16 +0000 (0:00:00.242) 0:00:19.323 *********** 2025-06-02 14:24:54.893146 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:24:54.893157 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:24:54.893167 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:24:54.893178 | orchestrator | 2025-06-02 14:24:54.893189 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-02 
14:24:54.893199 | orchestrator | Monday 02 June 2025 14:23:23 +0000 (0:01:07.826) 0:01:27.149 ***********
2025-06-02 14:24:54.893210 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:24:54.893221 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:24:54.893231 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:24:54.893242 | orchestrator |
2025-06-02 14:24:54.893252 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-02 14:24:54.893263 | orchestrator | Monday 02 June 2025 14:24:43 +0000 (0:01:19.584) 0:02:46.734 ***********
2025-06-02 14:24:54.893281 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:24:54.893292 | orchestrator |
2025-06-02 14:24:54.893303 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-02 14:24:54.893314 | orchestrator | Monday 02 June 2025 14:24:44 +0000 (0:00:00.586) 0:02:47.321 ***********
2025-06-02 14:24:54.893324 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:24:54.893335 | orchestrator |
2025-06-02 14:24:54.893346 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-02 14:24:54.893356 | orchestrator | Monday 02 June 2025 14:24:46 +0000 (0:00:02.277) 0:02:49.598 ***********
2025-06-02 14:24:54.893367 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:24:54.893378 | orchestrator |
2025-06-02 14:24:54.893389 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-02 14:24:54.893399 | orchestrator | Monday 02 June 2025 14:24:48 +0000 (0:00:02.049) 0:02:51.648 ***********
2025-06-02 14:24:54.893410 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:24:54.893421 | orchestrator |
2025-06-02 14:24:54.893431 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-02 14:24:54.893442 | orchestrator | Monday 02 June 2025 14:24:50 +0000 (0:00:02.537) 0:02:54.185 ***********
2025-06-02 14:24:54.893453 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:24:54.893463 | orchestrator |
2025-06-02 14:24:54.893474 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:24:54.893485 | orchestrator | testbed-node-0 : ok=18 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 14:24:54.893497 | orchestrator | testbed-node-1 : ok=14 changed=9 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-06-02 14:24:54.893508 | orchestrator | testbed-node-2 : ok=14 changed=9 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
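The three retention tasks above talk to the OpenSearch Index State Management (ISM) plugin over its REST API: check whether a policy exists, create it if missing, then attach it to indices that already exist. A rough Python sketch of that sequence; the policy id, retention window, index pattern and request bodies are illustrative assumptions, not the payloads the role actually sends:

import requests

BASE = "http://192.168.16.10:9200"  # first OpenSearch node, from the log above
POLICY_URL = f"{BASE}/_plugins/_ism/policies/log-retention"  # illustrative id

# "Check if a log retention policy exists": a GET that 404s on a fresh cluster.
if requests.get(POLICY_URL, timeout=10).status_code == 404:
    # "Create new log retention policy": delete indices after 14 days (assumed).
    policy = {
        "policy": {
            "description": "log retention (sketch)",
            "default_state": "hot",
            "states": [
                {"name": "hot",
                 "transitions": [{"state_name": "delete",
                                  "conditions": {"min_index_age": "14d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
        }
    }
    requests.put(POLICY_URL, json=policy, timeout=10).raise_for_status()

# "Apply retention policy to existing indices": attach the policy to indices
# created before the policy existed (the index pattern here is an assumption).
requests.post(f"{BASE}/_plugins/_ism/add/log-*",
              json={"policy_id": "log-retention"}, timeout=10).raise_for_status()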
---------------- 3.54s 2025-06-02 14:24:54.893612 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.54s 2025-06-02 14:24:54.893622 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2025-06-02 14:24:54.893633 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.50s 2025-06-02 14:24:54.893644 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.36s 2025-06-02 14:24:54.893654 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.28s 2025-06-02 14:24:54.893665 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.20s 2025-06-02 14:24:54.893676 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.05s 2025-06-02 14:24:54.893686 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s 2025-06-02 14:24:54.893697 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.48s 2025-06-02 14:24:54.893708 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s 2025-06-02 14:24:54.893718 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.91s 2025-06-02 14:24:54.893744 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-06-02 14:24:54.893759 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2025-06-02 14:24:54.893770 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-06-02 14:24:54.893781 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-06-02 14:24:54.893791 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.45s 2025-06-02 14:24:54.893802 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2025-06-02 14:24:57.935937 | orchestrator | 2025-06-02 14:24:57 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED 2025-06-02 14:24:57.937666 | orchestrator | 2025-06-02 14:24:57 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:24:57.937700 | orchestrator | 2025-06-02 14:24:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:00.983746 | orchestrator | 2025-06-02 14:25:00 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED 2025-06-02 14:25:00.984321 | orchestrator | 2025-06-02 14:25:00 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:00.984357 | orchestrator | 2025-06-02 14:25:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:04.028828 | orchestrator | 2025-06-02 14:25:04 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state STARTED 2025-06-02 14:25:04.030342 | orchestrator | 2025-06-02 14:25:04 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:04.030378 | orchestrator | 2025-06-02 14:25:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:07.074293 | orchestrator | 2025-06-02 14:25:07 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:07.079367 | orchestrator | 2025-06-02 14:25:07.079448 | orchestrator | 2025-06-02 14:25:07.079464 | 
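
The opensearch post-config tasks above ('Check if a log retention policy exists', 'Create new log retention policy', 'Apply retention policy to existing indices') talk to OpenSearch's Index State Management (ISM) plugin. A rough Python sketch of the equivalent API calls; the endpoint, credentials, policy id, index pattern, and retention period are all assumptions, not values taken from this run:

    import requests

    BASE = "https://192.168.16.12:9200"   # internal OpenSearch endpoint (assumption)
    AUTH = ("admin", "password")          # placeholder credentials (assumption)
    POLICY_ID = "retention"               # hypothetical policy id

    policy = {
        "policy": {
            "description": "delete indices once they exceed the retention period",
            "default_state": "retain",
            "states": [
                {"name": "retain", "actions": [],
                 "transitions": [{"state_name": "delete",
                                  "conditions": {"min_index_age": "30d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
            "ism_template": [{"index_patterns": ["flog-*"], "priority": 100}],
        }
    }

    # Check first, create only if missing -- mirrors the check/create task pair.
    r = requests.get(f"{BASE}/_plugins/_ism/policies/{POLICY_ID}", auth=AUTH, verify=False)
    if r.status_code == 404:
        requests.put(f"{BASE}/_plugins/_ism/policies/{POLICY_ID}",
                     json=policy, auth=AUTH, verify=False).raise_for_status()

    # Attach the policy to indices that already exist (new ones match ism_template).
    requests.post(f"{BASE}/_plugins/_ism/add/flog-*",
                  json={"policy_id": POLICY_ID}, auth=AUTH, verify=False).raise_for_status()
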
orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-02 14:25:07.079476 | orchestrator | 2025-06-02 14:25:07.079488 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 14:25:07.079499 | orchestrator | Monday 02 June 2025 14:21:56 +0000 (0:00:00.087) 0:00:00.087 *********** 2025-06-02 14:25:07.079510 | orchestrator | ok: [localhost] => { 2025-06-02 14:25:07.079523 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 14:25:07.079534 | orchestrator | } 2025-06-02 14:25:07.079602 | orchestrator | 2025-06-02 14:25:07.079615 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 14:25:07.079626 | orchestrator | Monday 02 June 2025 14:21:57 +0000 (0:00:00.050) 0:00:00.138 *********** 2025-06-02 14:25:07.079637 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 14:25:07.079649 | orchestrator | ...ignoring 2025-06-02 14:25:07.079660 | orchestrator | 2025-06-02 14:25:07.079671 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 14:25:07.079682 | orchestrator | Monday 02 June 2025 14:21:59 +0000 (0:00:02.752) 0:00:02.890 *********** 2025-06-02 14:25:07.079693 | orchestrator | skipping: [localhost] 2025-06-02 14:25:07.079704 | orchestrator | 2025-06-02 14:25:07.079715 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 14:25:07.079726 | orchestrator | Monday 02 June 2025 14:21:59 +0000 (0:00:00.071) 0:00:02.961 *********** 2025-06-02 14:25:07.079737 | orchestrator | ok: [localhost] 2025-06-02 14:25:07.079748 | orchestrator | 2025-06-02 14:25:07.079758 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:25:07.079769 | orchestrator | 2025-06-02 14:25:07.079780 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:25:07.079816 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:00.158) 0:00:03.120 *********** 2025-06-02 14:25:07.079828 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.079839 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.079849 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.079860 | orchestrator | 2025-06-02 14:25:07.079871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:25:07.079882 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:00.291) 0:00:03.411 *********** 2025-06-02 14:25:07.079893 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 14:25:07.079907 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 14:25:07.079919 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 14:25:07.079932 | orchestrator | 2025-06-02 14:25:07.079945 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 14:25:07.079960 | orchestrator | 2025-06-02 14:25:07.079973 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 14:25:07.080009 | orchestrator | Monday 02 June 2025 14:22:00 +0000 (0:00:00.630) 0:00:04.041 *********** 
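
The ignored failure in 'Check MariaDB service' above is the expected first-run outcome: the play probes the internal VIP (192.168.16.9:3306) for the search string 'MariaDB' and only switches kolla_action_mariadb to 'upgrade' when a server is already answering. The probe works because a MariaDB server sends a handshake packet containing its version string as soon as a TCP client connects. A rough Python equivalent of that wait_for probe (host and port from the log; the timeout value is assumed):

    import socket

    def mariadb_running(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
        """True if a MariaDB handshake banner is readable on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                banner = sock.recv(1024)      # server greeting, e.g. b"...MariaDB..."
                return b"MariaDB" in banner
        except OSError:
            return False                      # refused/timed out: not deployed yet

    print(mariadb_running("192.168.16.9"))    # False on a fresh deployment, as above
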
2025-06-02 14:25:07.080023 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 14:25:07.080035 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 14:25:07.080048 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 14:25:07.080061 | orchestrator | 2025-06-02 14:25:07.080074 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 14:25:07.080088 | orchestrator | Monday 02 June 2025 14:22:01 +0000 (0:00:00.360) 0:00:04.402 *********** 2025-06-02 14:25:07.080101 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:25:07.080115 | orchestrator | 2025-06-02 14:25:07.080128 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-02 14:25:07.080155 | orchestrator | Monday 02 June 2025 14:22:01 +0000 (0:00:00.498) 0:00:04.901 *********** 2025-06-02 14:25:07.080193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080250 | orchestrator | 2025-06-02 14:25:07.080271 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-02 14:25:07.080283 | orchestrator | Monday 02 June 2025 14:22:04 +0000 (0:00:02.729) 0:00:07.631 *********** 2025-06-02 14:25:07.080294 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.080306 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.080316 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.080327 | orchestrator | 2025-06-02 14:25:07.080346 | 
orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-02 14:25:07.080357 | orchestrator | Monday 02 June 2025 14:22:05 +0000 (0:00:00.822) 0:00:08.453 *********** 2025-06-02 14:25:07.080368 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.080379 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.080390 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.080401 | orchestrator | 2025-06-02 14:25:07.080412 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-02 14:25:07.080423 | orchestrator | Monday 02 June 2025 14:22:06 +0000 (0:00:01.596) 0:00:10.050 *********** 2025-06-02 14:25:07.080435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.080494 | orchestrator | 2025-06-02 14:25:07.080505 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-02 14:25:07.080516 | orchestrator | Monday 02 June 2025 14:22:11 +0000 (0:00:04.233) 0:00:14.284 *********** 2025-06-02 14:25:07.080527 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.080538 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.080549 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.080560 | orchestrator | 2025-06-02 14:25:07.080571 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-02 14:25:07.080582 | orchestrator | Monday 02 June 2025 14:22:12 +0000 (0:00:01.216) 0:00:15.500 *********** 2025-06-02 14:25:07.080593 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 14:25:07.080604 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:25:07.080614 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:25:07.080626 | orchestrator | 2025-06-02 14:25:07.080643 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 14:25:07.080654 | orchestrator | Monday 02 June 2025 14:22:16 +0000 (0:00:04.047) 0:00:19.548 *********** 2025-06-02 14:25:07.080666 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:25:07.080677 | orchestrator | 2025-06-02 14:25:07.080688 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 14:25:07.080699 | orchestrator | Monday 02 June 2025 14:22:16 +0000 (0:00:00.548) 0:00:20.097 *********** 2025-06-02 14:25:07.080719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080738 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.080750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080763 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.080786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080806 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.080818 | orchestrator | 2025-06-02 14:25:07.080829 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 14:25:07.080840 | orchestrator | Monday 02 June 2025 14:22:20 +0000 (0:00:03.309) 0:00:23.407 *********** 2025-06-02 14:25:07.080852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080864 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.080886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080905 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.080917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.080929 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.080940 | orchestrator | 2025-06-02 14:25:07.080951 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 14:25:07.080962 | orchestrator | Monday 02 June 2025 14:22:22 +0000 (0:00:02.398) 0:00:25.805 *********** 2025-06-02 14:25:07.080979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.081026 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.081048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.081060 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.081077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 14:25:07.081097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.081108 | orchestrator | 2025-06-02 14:25:07.081119 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 14:25:07.081131 | orchestrator | Monday 02 June 2025 14:22:25 +0000 (0:00:02.551) 0:00:28.356 *********** 2025-06-02 14:25:07 | INFO  | Task 58ba6125-a9ef-49f6-8891-d507161be977 is in state SUCCESS 2025-06-02 14:25:07.081149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.081182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.081209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 14:25:07.081223 | orchestrator | 2025-06-02 14:25:07.081234 | orchestrator | TASK [mariadb : Create 
MariaDB volume] ***************************************** 2025-06-02 14:25:07.081245 | orchestrator | Monday 02 June 2025 14:22:28 +0000 (0:00:03.174) 0:00:31.530 *********** 2025-06-02 14:25:07.081256 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:25:07.081267 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.081278 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:25:07.081289 | orchestrator | 2025-06-02 14:25:07.081300 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-02 14:25:07.081311 | orchestrator | Monday 02 June 2025 14:22:29 +0000 (0:00:01.162) 0:00:32.693 *********** 2025-06-02 14:25:07.081322 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.081333 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.081344 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.081355 | orchestrator | 2025-06-02 14:25:07.081366 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-02 14:25:07.081377 | orchestrator | Monday 02 June 2025 14:22:30 +0000 (0:00:00.508) 0:00:33.201 *********** 2025-06-02 14:25:07.081388 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.081398 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.081410 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.081420 | orchestrator | 2025-06-02 14:25:07.081432 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-02 14:25:07.081449 | orchestrator | Monday 02 June 2025 14:22:30 +0000 (0:00:00.431) 0:00:33.633 *********** 2025-06-02 14:25:07.081461 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-02 14:25:07.081472 | orchestrator | ...ignoring 2025-06-02 14:25:07.081483 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-02 14:25:07.081494 | orchestrator | ...ignoring 2025-06-02 14:25:07.081510 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-02 14:25:07.081521 | orchestrator | ...ignoring 2025-06-02 14:25:07.081532 | orchestrator | 2025-06-02 14:25:07.081543 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-02 14:25:07.081554 | orchestrator | Monday 02 June 2025 14:22:41 +0000 (0:00:11.162) 0:00:44.796 *********** 2025-06-02 14:25:07.081565 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.081576 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.081587 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.081598 | orchestrator | 2025-06-02 14:25:07.081609 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-02 14:25:07.081620 | orchestrator | Monday 02 June 2025 14:22:42 +0000 (0:00:00.708) 0:00:45.504 *********** 2025-06-02 14:25:07.081631 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.081642 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.081653 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.081664 | orchestrator | 2025-06-02 14:25:07.081675 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-02 14:25:07.081686 | orchestrator | Monday 02 June 2025 14:22:42 +0000 (0:00:00.423) 0:00:45.928 *********** 2025-06-02 14:25:07.081697 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.081708 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.081718 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.081729 | orchestrator | 2025-06-02 14:25:07.081746 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-02 14:25:07.081764 | orchestrator | Monday 02 June 2025 14:22:43 +0000 (0:00:00.408) 0:00:46.337 *********** 2025-06-02 14:25:07.081784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.081803 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.081822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.081840 | orchestrator | 2025-06-02 14:25:07.081859 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-02 14:25:07.081875 | orchestrator | Monday 02 June 2025 14:22:43 +0000 (0:00:00.415) 0:00:46.752 *********** 2025-06-02 14:25:07.081893 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.081913 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.081933 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.081953 | orchestrator | 2025-06-02 14:25:07.082073 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-02 14:25:07.082093 | orchestrator | Monday 02 June 2025 14:22:44 +0000 (0:00:00.623) 0:00:47.376 *********** 2025-06-02 14:25:07.082104 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.082115 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.082126 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.082136 | orchestrator | 2025-06-02 14:25:07.082147 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 14:25:07.082158 | orchestrator | Monday 02 June 2025 14:22:44 +0000 (0:00:00.427) 0:00:47.803 *********** 2025-06-02 14:25:07.082169 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.082180 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 14:25:07.082191 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-02 14:25:07.082212 | orchestrator | 2025-06-02 14:25:07.082223 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-02 14:25:07.082234 | orchestrator | Monday 02 June 2025 14:22:45 +0000 (0:00:00.395) 0:00:48.199 *********** 2025-06-02 14:25:07.082245 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.082256 | orchestrator | 2025-06-02 14:25:07.082267 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-02 14:25:07.082278 | orchestrator | Monday 02 June 2025 14:22:55 +0000 (0:00:09.951) 0:00:58.150 *********** 2025-06-02 14:25:07.082288 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.082299 | orchestrator | 2025-06-02 14:25:07.082310 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 14:25:07.082321 | orchestrator | Monday 02 June 2025 14:22:55 +0000 (0:00:00.135) 0:00:58.286 *********** 2025-06-02 14:25:07.082332 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.082343 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.082354 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.082365 | orchestrator | 2025-06-02 14:25:07.082375 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-02 14:25:07.082387 | orchestrator | Monday 02 June 2025 14:22:56 +0000 (0:00:01.028) 0:00:59.314 *********** 2025-06-02 14:25:07.082397 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.082408 | orchestrator | 2025-06-02 14:25:07.082419 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-02 14:25:07.082430 | orchestrator | Monday 02 June 2025 14:23:03 +0000 (0:00:07.637) 0:01:06.952 *********** 2025-06-02 14:25:07.082441 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.082452 | orchestrator | 2025-06-02 14:25:07.082463 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-02 14:25:07.082474 | orchestrator | Monday 02 June 2025 14:23:05 +0000 (0:00:01.577) 0:01:08.529 *********** 2025-06-02 14:25:07.082485 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.082496 | orchestrator | 2025-06-02 14:25:07.082506 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-02 14:25:07.082517 | orchestrator | Monday 02 June 2025 14:23:07 +0000 (0:00:02.536) 0:01:11.066 *********** 2025-06-02 14:25:07.082528 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.082539 | orchestrator | 2025-06-02 14:25:07.082550 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-02 14:25:07.082561 | orchestrator | Monday 02 June 2025 14:23:08 +0000 (0:00:00.127) 0:01:11.194 *********** 2025-06-02 14:25:07.082572 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.082583 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.082594 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.082604 | orchestrator | 2025-06-02 14:25:07.082615 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-02 14:25:07.082626 | orchestrator | Monday 02 June 2025 14:23:08 +0000 (0:00:00.558) 0:01:11.753 *********** 
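
The preceding sequence is the standard Galera bootstrap: exactly one node (testbed-node-0) runs the bootstrap container to form a new cluster, its first MariaDB container is started, and the handlers wait for both port liveness and WSREP sync before the remaining members join. The sync wait boils down to polling a status variable until the node reports Synced; a rough sketch, assuming the PyMySQL client and the monitor credentials shown in the item dumps above:

    import time
    import pymysql

    def wait_wsrep_synced(host: str, user: str, password: str, timeout: int = 300) -> None:
        """Poll wsrep_local_state_comment until the Galera node reports 'Synced'."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                conn = pymysql.connect(host=host, user=user, password=password)
                with conn.cursor() as cur:
                    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
                    row = cur.fetchone()      # e.g. ('wsrep_local_state_comment', 'Synced')
                conn.close()
                if row and row[1] == "Synced":
                    return
            except pymysql.MySQLError:
                pass                          # server not accepting connections yet
            time.sleep(2)
        raise TimeoutError(f"{host} never reached WSREP state 'Synced'")
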
2025-06-02 14:25:07.082637 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.082648 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 14:25:07.082658 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:25:07.082680 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:25:07.082700 | orchestrator | 2025-06-02 14:25:07.082731 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 14:25:07.082751 | orchestrator | skipping: no hosts matched 2025-06-02 14:25:07.082770 | orchestrator | 2025-06-02 14:25:07.082789 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 14:25:07.082807 | orchestrator | 2025-06-02 14:25:07.082823 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 14:25:07.082841 | orchestrator | Monday 02 June 2025 14:23:08 +0000 (0:00:00.340) 0:01:12.094 *********** 2025-06-02 14:25:07.082857 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:25:07.082875 | orchestrator | 2025-06-02 14:25:07.082894 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 14:25:07.082927 | orchestrator | Monday 02 June 2025 14:23:27 +0000 (0:00:18.665) 0:01:30.759 *********** 2025-06-02 14:25:07.082948 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.082967 | orchestrator | 2025-06-02 14:25:07.083006 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 14:25:07.083018 | orchestrator | Monday 02 June 2025 14:23:48 +0000 (0:00:20.580) 0:01:51.340 *********** 2025-06-02 14:25:07.083029 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.083040 | orchestrator | 2025-06-02 14:25:07.083051 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 14:25:07.083061 | orchestrator | 2025-06-02 14:25:07.083072 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 14:25:07.083083 | orchestrator | Monday 02 June 2025 14:23:50 +0000 (0:00:02.506) 0:01:53.847 *********** 2025-06-02 14:25:07.083093 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:25:07.083104 | orchestrator | 2025-06-02 14:25:07.083115 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 14:25:07.083126 | orchestrator | Monday 02 June 2025 14:24:10 +0000 (0:00:19.750) 0:02:13.597 *********** 2025-06-02 14:25:07.083136 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.083147 | orchestrator | 2025-06-02 14:25:07.083158 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 14:25:07.083169 | orchestrator | Monday 02 June 2025 14:24:32 +0000 (0:00:21.589) 0:02:35.187 *********** 2025-06-02 14:25:07.083189 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.083201 | orchestrator | 2025-06-02 14:25:07.083212 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 14:25:07.083223 | orchestrator | 2025-06-02 14:25:07.083234 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 14:25:07.083245 | orchestrator | Monday 02 June 2025 14:24:34 +0000 (0:00:02.751) 0:02:37.938 *********** 2025-06-02 14:25:07.083255 | orchestrator | changed: [testbed-node-0] 
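
Note the restart order that plays out from here: the joiner nodes (testbed-node-1, then testbed-node-2) are each restarted in their own play with a port-liveness wait and a WSREP-sync wait in between, and the bootstrap node testbed-node-0 is restarted last, so the Galera cluster keeps quorum throughout. The same serialization, sketched with the two helpers above (the ssh/docker command is an assumed stand-in for the kolla container handler):

    import subprocess
    import time

    # Joiners first, bootstrap node last -- matching the play order in this log.
    NODES = ["192.168.16.11", "192.168.16.12", "192.168.16.10"]

    for node in NODES:
        subprocess.run(["ssh", node, "docker", "restart", "mariadb"], check=True)
        while not mariadb_running(node):      # port liveness, as in the tasks above
            time.sleep(2)
        wait_wsrep_synced(node, user="monitor", password="...")  # sync before the next node
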
2025-06-02 14:25:07.083266 | orchestrator | 2025-06-02 14:25:07.083277 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 14:25:07.083288 | orchestrator | Monday 02 June 2025 14:24:45 +0000 (0:00:10.838) 0:02:48.777 *********** 2025-06-02 14:25:07.083299 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.083310 | orchestrator | 2025-06-02 14:25:07.083320 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 14:25:07.083331 | orchestrator | Monday 02 June 2025 14:24:50 +0000 (0:00:04.589) 0:02:53.366 *********** 2025-06-02 14:25:07.083342 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.083353 | orchestrator | 2025-06-02 14:25:07.083364 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 14:25:07.083375 | orchestrator | 2025-06-02 14:25:07.083385 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 14:25:07.083396 | orchestrator | Monday 02 June 2025 14:24:52 +0000 (0:00:02.384) 0:02:55.751 *********** 2025-06-02 14:25:07.083407 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:25:07.083418 | orchestrator | 2025-06-02 14:25:07.083429 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-02 14:25:07.083440 | orchestrator | Monday 02 June 2025 14:24:53 +0000 (0:00:00.522) 0:02:56.273 *********** 2025-06-02 14:25:07.083451 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.083462 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.083473 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.083484 | orchestrator | 2025-06-02 14:25:07.083494 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-02 14:25:07.083505 | orchestrator | Monday 02 June 2025 14:24:55 +0000 (0:00:02.343) 0:02:58.617 *********** 2025-06-02 14:25:07.083516 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.083527 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.083538 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.083556 | orchestrator | 2025-06-02 14:25:07.083566 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-02 14:25:07.083577 | orchestrator | Monday 02 June 2025 14:24:57 +0000 (0:00:01.949) 0:03:00.566 *********** 2025-06-02 14:25:07.083588 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.083599 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.083610 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.083621 | orchestrator | 2025-06-02 14:25:07.083632 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-02 14:25:07.083643 | orchestrator | Monday 02 June 2025 14:24:59 +0000 (0:00:02.056) 0:03:02.623 *********** 2025-06-02 14:25:07.083653 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.083664 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.083675 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:25:07.083686 | orchestrator | 2025-06-02 14:25:07.083697 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-02 14:25:07.083708 | orchestrator | Monday 02 June 2025 14:25:01 +0000 (0:00:02.066) 0:03:04.690 *********** 
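The post-configuration tasks above are cluster-wide operations (Galera replicates the grants), which is why they report changed on testbed-node-0 and skipping on the other members: they are guarded so that only the first host in the group executes them. A sketch of that pattern with community.mysql.mysql_user; the user name, privilege list, and variable names are illustrative, not the exact kolla-ansible values:

# Sketch: create a backup user once per cluster, guarded to the first host.
# "backup", "database_address", and "mariadb_backup_password" are assumptions.
- name: Creating database backup user and setting permissions
  community.mysql.mysql_user:
    login_host: "{{ database_address }}"
    login_user: root
    login_password: "{{ database_password }}"
    name: backup
    host: "%"
    password: "{{ mariadb_backup_password }}"
    priv: "*.*:RELOAD,PROCESS,LOCK TABLES,REPLICATION CLIENT"
  when: inventory_hostname == groups['mariadb'][0]

The "ready through VIP" wait that follows then verifies reachability from every node through the load-balanced address, confirming the cluster answers at the shared endpoint and not just on each node's own socket.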
2025-06-02 14:25:07.083719 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:25:07.083730 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:25:07.083741 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:25:07.083751 | orchestrator | 2025-06-02 14:25:07.083762 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 14:25:07.083773 | orchestrator | Monday 02 June 2025 14:25:04 +0000 (0:00:02.885) 0:03:07.576 *********** 2025-06-02 14:25:07.083784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:25:07.083795 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:25:07.083812 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:25:07.083823 | orchestrator | 2025-06-02 14:25:07.083834 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:25:07.083846 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 14:25:07.083857 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-02 14:25:07.083870 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 14:25:07.083881 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 14:25:07.083892 | orchestrator | 2025-06-02 14:25:07.083903 | orchestrator | 2025-06-02 14:25:07.083914 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:25:07.083925 | orchestrator | Monday 02 June 2025 14:25:04 +0000 (0:00:00.248) 0:03:07.824 *********** 2025-06-02 14:25:07.083936 | orchestrator | =============================================================================== 2025-06-02 14:25:07.083947 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.17s 2025-06-02 14:25:07.083958 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.42s 2025-06-02 14:25:07.083969 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.16s 2025-06-02 14:25:07.083980 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.84s 2025-06-02 14:25:07.084014 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.95s 2025-06-02 14:25:07.084048 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.64s 2025-06-02 14:25:07.084073 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.26s 2025-06-02 14:25:07.084091 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2025-06-02 14:25:07.084109 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.23s 2025-06-02 14:25:07.084138 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.05s 2025-06-02 14:25:07.084156 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.31s 2025-06-02 14:25:07.084173 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.17s 2025-06-02 14:25:07.084191 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.89s 2025-06-02 14:25:07.084210 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.75s 2025-06-02 14:25:07.084230 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.73s 2025-06-02 14:25:07.084248 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.55s 2025-06-02 14:25:07.084266 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2025-06-02 14:25:07.084277 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.40s 2025-06-02 14:25:07.084288 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.38s 2025-06-02 14:25:07.084299 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.34s 2025-06-02 14:25:07.084310 | orchestrator | 2025-06-02 14:25:07 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:07.084321 | orchestrator | 2025-06-02 14:25:07 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:07.084332 | orchestrator | 2025-06-02 14:25:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:10.138497 | orchestrator | 2025-06-02 14:25:10 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:10.139606 | orchestrator | 2025-06-02 14:25:10 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:10.141141 | orchestrator | 2025-06-02 14:25:10 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:10.141180 | orchestrator | 2025-06-02 14:25:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:13.185464 | orchestrator | 2025-06-02 14:25:13 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:13.187728 | orchestrator | 2025-06-02 14:25:13 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:13.190923 | orchestrator | 2025-06-02 14:25:13 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:13.190957 | orchestrator | 2025-06-02 14:25:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:16.225847 | orchestrator | 2025-06-02 14:25:16 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:16.226333 | orchestrator | 2025-06-02 14:25:16 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:16.227663 | orchestrator | 2025-06-02 14:25:16 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:16.227689 | orchestrator | 2025-06-02 14:25:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:19.266338 | orchestrator | 2025-06-02 14:25:19 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:19.269300 | orchestrator | 2025-06-02 14:25:19 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:19.271416 | orchestrator | 2025-06-02 14:25:19 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:19.271674 | orchestrator | 2025-06-02 14:25:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:22.312390 | orchestrator | 2025-06-02 14:25:22 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:22.313141 | orchestrator | 2025-06-02 14:25:22 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 
14:25:22.314412 | orchestrator | 2025-06-02 14:25:22 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:22.314441 | orchestrator | 2025-06-02 14:25:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:25.359557 | orchestrator | 2025-06-02 14:25:25 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:25.363228 | orchestrator | 2025-06-02 14:25:25 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:25.364425 | orchestrator | 2025-06-02 14:25:25 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:25.364801 | orchestrator | 2025-06-02 14:25:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:28.404229 | orchestrator | 2025-06-02 14:25:28 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:28.404341 | orchestrator | 2025-06-02 14:25:28 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:28.404908 | orchestrator | 2025-06-02 14:25:28 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:28.404934 | orchestrator | 2025-06-02 14:25:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:31.447680 | orchestrator | 2025-06-02 14:25:31 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:31.448992 | orchestrator | 2025-06-02 14:25:31 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:31.451193 | orchestrator | 2025-06-02 14:25:31 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:31.451228 | orchestrator | 2025-06-02 14:25:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:34.503655 | orchestrator | 2025-06-02 14:25:34 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:34.504785 | orchestrator | 2025-06-02 14:25:34 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:34.506757 | orchestrator | 2025-06-02 14:25:34 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:34.506819 | orchestrator | 2025-06-02 14:25:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:37.546561 | orchestrator | 2025-06-02 14:25:37 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:37.547058 | orchestrator | 2025-06-02 14:25:37 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:37.550152 | orchestrator | 2025-06-02 14:25:37 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:37.550183 | orchestrator | 2025-06-02 14:25:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:40.591076 | orchestrator | 2025-06-02 14:25:40 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:40.594969 | orchestrator | 2025-06-02 14:25:40 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:40.595788 | orchestrator | 2025-06-02 14:25:40 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:40.595818 | orchestrator | 2025-06-02 14:25:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:43.640668 | orchestrator | 2025-06-02 14:25:43 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:43.640796 | orchestrator | 2025-06-02 
14:25:43 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:43.640840 | orchestrator | 2025-06-02 14:25:43 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:43.640855 | orchestrator | 2025-06-02 14:25:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:46.691949 | orchestrator | 2025-06-02 14:25:46 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:46.695001 | orchestrator | 2025-06-02 14:25:46 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state STARTED 2025-06-02 14:25:46.696601 | orchestrator | 2025-06-02 14:25:46 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:25:46.696631 | orchestrator | 2025-06-02 14:25:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:25:49.754375 | orchestrator | 2025-06-02 14:25:49 | INFO  | Task 6ea4d6a1-38df-46ea-b97c-b99e342c1db7 is in state STARTED 2025-06-02 14:25:49.754836 | orchestrator | 2025-06-02 14:25:49 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED 2025-06-02 14:25:49.756481 | orchestrator | 2025-06-02 14:25:49 | INFO  | Task 280fb7d9-bbb8-4854-80ff-fd404d0f8ba7 is in state SUCCESS 2025-06-02 14:25:49.759033 | orchestrator | 2025-06-02 14:25:49.759090 | orchestrator | 2025-06-02 14:25:49.759103 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-02 14:25:49.759116 | orchestrator | 2025-06-02 14:25:49.759127 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 14:25:49.759165 | orchestrator | Monday 02 June 2025 14:23:41 +0000 (0:00:00.574) 0:00:00.574 *********** 2025-06-02 14:25:49.759177 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:25:49.759190 | orchestrator | 2025-06-02 14:25:49.759201 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 14:25:49.759212 | orchestrator | Monday 02 June 2025 14:23:41 +0000 (0:00:00.632) 0:00:01.207 *********** 2025-06-02 14:25:49.759223 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.759235 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.759245 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.759256 | orchestrator | 2025-06-02 14:25:49.759267 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 14:25:49.759278 | orchestrator | Monday 02 June 2025 14:23:42 +0000 (0:00:00.680) 0:00:01.888 *********** 2025-06-02 14:25:49.759289 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.759300 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.759310 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.759321 | orchestrator | 2025-06-02 14:25:49.759332 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 14:25:49.759343 | orchestrator | Monday 02 June 2025 14:23:42 +0000 (0:00:00.305) 0:00:02.193 *********** 2025-06-02 14:25:49.759365 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.759376 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.759387 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.759398 | orchestrator | 2025-06-02 14:25:49.760185 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 14:25:49.760201 | 
orchestrator | Monday 02 June 2025 14:23:43 +0000 (0:00:00.856) 0:00:03.049 *********** 2025-06-02 14:25:49.760212 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.760223 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.760234 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.760245 | orchestrator | 2025-06-02 14:25:49.760256 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-02 14:25:49.760266 | orchestrator | Monday 02 June 2025 14:23:43 +0000 (0:00:00.317) 0:00:03.367 *********** 2025-06-02 14:25:49.760277 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.760288 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.760299 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.760310 | orchestrator | 2025-06-02 14:25:49.760321 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 14:25:49.760360 | orchestrator | Monday 02 June 2025 14:23:44 +0000 (0:00:00.311) 0:00:03.679 *********** 2025-06-02 14:25:49.760371 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.760382 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.760393 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.760404 | orchestrator | 2025-06-02 14:25:49.760415 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 14:25:49.760426 | orchestrator | Monday 02 June 2025 14:23:44 +0000 (0:00:00.342) 0:00:04.022 *********** 2025-06-02 14:25:49.760437 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.760449 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.760460 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.760472 | orchestrator | 2025-06-02 14:25:49.760482 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 14:25:49.760493 | orchestrator | Monday 02 June 2025 14:23:45 +0000 (0:00:00.517) 0:00:04.540 *********** 2025-06-02 14:25:49.760504 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.760515 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.760526 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.760537 | orchestrator | 2025-06-02 14:25:49.760547 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 14:25:49.760558 | orchestrator | Monday 02 June 2025 14:23:45 +0000 (0:00:00.310) 0:00:04.850 *********** 2025-06-02 14:25:49.760569 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 14:25:49.760580 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:25:49.760591 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:25:49.760684 | orchestrator | 2025-06-02 14:25:49.760700 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 14:25:49.760727 | orchestrator | Monday 02 June 2025 14:23:45 +0000 (0:00:00.623) 0:00:05.474 *********** 2025-06-02 14:25:49.760738 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.760749 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.760760 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.760770 | orchestrator | 2025-06-02 14:25:49.760781 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 
14:25:49.760792 | orchestrator | Monday 02 June 2025 14:23:46 +0000 (0:00:00.458) 0:00:05.933 *********** 2025-06-02 14:25:49.760803 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 14:25:49.760814 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:25:49.760824 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:25:49.760835 | orchestrator | 2025-06-02 14:25:49.760846 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 14:25:49.760856 | orchestrator | Monday 02 June 2025 14:23:48 +0000 (0:00:02.054) 0:00:07.987 *********** 2025-06-02 14:25:49.760868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 14:25:49.760878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 14:25:49.760889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 14:25:49.760900 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.760911 | orchestrator | 2025-06-02 14:25:49.760922 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 14:25:49.760978 | orchestrator | Monday 02 June 2025 14:23:48 +0000 (0:00:00.424) 0:00:08.411 *********** 2025-06-02 14:25:49.760993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761039 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761050 | orchestrator | 2025-06-02 14:25:49.761061 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 14:25:49.761071 | orchestrator | Monday 02 June 2025 14:23:49 +0000 (0:00:00.803) 0:00:09.214 *********** 2025-06-02 14:25:49.761084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761110 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.761121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761169 | orchestrator | 2025-06-02 14:25:49.761181 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 14:25:49.761192 | orchestrator | Monday 02 June 2025 14:23:49 +0000 (0:00:00.176) 0:00:09.391 *********** 2025-06-02 14:25:49.761215 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '36edc81cfac4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 14:23:47.094608', 'end': '2025-06-02 14:23:47.146105', 'delta': '0:00:00.051497', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['36edc81cfac4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-02 14:25:49.761240 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '19f1ceb29cee', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 14:23:47.828295', 'end': '2025-06-02 14:23:47.862347', 'delta': '0:00:00.034052', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['19f1ceb29cee'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-02 14:25:49.761322 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '894ddd74584d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 14:23:48.330555', 'end': '2025-06-02 14:23:48.376896', 'delta': '0:00:00.046341', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['894ddd74584d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-02 14:25:49.761361 | orchestrator | 2025-06-02 14:25:49.761379 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 14:25:49.761397 | orchestrator | Monday 02 June 2025 14:23:50 +0000 (0:00:00.392) 0:00:09.784 *********** 2025-06-02 14:25:49.761415 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.761432 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 14:25:49.761450 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.761467 | orchestrator | 2025-06-02 14:25:49.761486 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 14:25:49.761503 | orchestrator | Monday 02 June 2025 14:23:50 +0000 (0:00:00.465) 0:00:10.250 *********** 2025-06-02 14:25:49.761522 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-02 14:25:49.761533 | orchestrator | 2025-06-02 14:25:49.761544 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 14:25:49.761554 | orchestrator | Monday 02 June 2025 14:23:52 +0000 (0:00:01.681) 0:00:11.931 *********** 2025-06-02 14:25:49.761565 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761576 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.761587 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.761598 | orchestrator | 2025-06-02 14:25:49.761608 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 14:25:49.761619 | orchestrator | Monday 02 June 2025 14:23:52 +0000 (0:00:00.316) 0:00:12.248 *********** 2025-06-02 14:25:49.761630 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761640 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.761651 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.761662 | orchestrator | 2025-06-02 14:25:49.761672 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 14:25:49.761683 | orchestrator | Monday 02 June 2025 14:23:53 +0000 (0:00:00.407) 0:00:12.655 *********** 2025-06-02 14:25:49.761694 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761704 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.761715 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.761726 | orchestrator | 2025-06-02 14:25:49.761737 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 14:25:49.761748 | orchestrator | Monday 02 June 2025 14:23:53 +0000 (0:00:00.505) 0:00:13.161 *********** 2025-06-02 14:25:49.761758 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.761769 | orchestrator | 2025-06-02 14:25:49.761780 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 14:25:49.761790 | orchestrator | Monday 02 June 2025 14:23:53 +0000 (0:00:00.140) 0:00:13.301 *********** 2025-06-02 14:25:49.761801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761812 | orchestrator | 2025-06-02 14:25:49.761822 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 14:25:49.761833 | orchestrator | Monday 02 June 2025 14:23:54 +0000 (0:00:00.247) 0:00:13.548 *********** 2025-06-02 14:25:49.761844 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761855 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.761865 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.761876 | orchestrator | 2025-06-02 14:25:49.761886 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 14:25:49.761966 | orchestrator | Monday 02 June 2025 14:23:54 +0000 (0:00:00.297) 0:00:13.846 *********** 2025-06-02 14:25:49.761986 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.761997 
| orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762008 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762064 | orchestrator | 2025-06-02 14:25:49.762078 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 14:25:49.762096 | orchestrator | Monday 02 June 2025 14:23:54 +0000 (0:00:00.397) 0:00:14.243 *********** 2025-06-02 14:25:49.762108 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762118 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762129 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762198 | orchestrator | 2025-06-02 14:25:49.762210 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 14:25:49.762220 | orchestrator | Monday 02 June 2025 14:23:55 +0000 (0:00:00.521) 0:00:14.764 *********** 2025-06-02 14:25:49.762231 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762242 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762253 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762263 | orchestrator | 2025-06-02 14:25:49.762274 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 14:25:49.762285 | orchestrator | Monday 02 June 2025 14:23:55 +0000 (0:00:00.321) 0:00:15.086 *********** 2025-06-02 14:25:49.762296 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762319 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762330 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762351 | orchestrator | 2025-06-02 14:25:49.762362 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 14:25:49.762373 | orchestrator | Monday 02 June 2025 14:23:55 +0000 (0:00:00.344) 0:00:15.431 *********** 2025-06-02 14:25:49.762384 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762395 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762406 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762417 | orchestrator | 2025-06-02 14:25:49.762428 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 14:25:49.762489 | orchestrator | Monday 02 June 2025 14:23:56 +0000 (0:00:00.311) 0:00:15.742 *********** 2025-06-02 14:25:49.762502 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762513 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.762524 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.762534 | orchestrator | 2025-06-02 14:25:49.762545 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 14:25:49.762556 | orchestrator | Monday 02 June 2025 14:23:56 +0000 (0:00:00.548) 0:00:16.291 *********** 2025-06-02 14:25:49.762568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2', 'dm-uuid-LVM-PRcTXFVMD2J9y2msp1jLbP8Tnzjv1PZVW7vY9gu7hRhzOlXXC6Y4BJjIOwreghe7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  
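Every block device fact gathered on the OSD nodes is enumerated and skipped in this task, because each loop item carries the false_condition osd_auto_discovery | default(False) | bool: this deployment declares its OSD devices explicitly, so auto-discovery stays off. A sketch of what the discovery branch would do if it were enabled; the filter conditions are an approximation of ceph-ansible's logic, not a copy of it:

# Sketch: build a device list from gathered facts when auto-discovery is on.
# The removable/partitions/holders filters approximate ceph-ansible's checks.
- name: Collect existed devices
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_facts['devices'] }}"
  when:
    - osd_auto_discovery | default(false) | bool
    - item.value.removable == '0'
    - item.value.partitions | length == 0
    - item.value.holders | length == 0

With the flag off, every dm-*, loop*, sd*, and sr0 entry above and below is evaluated against that first condition and skipped, which is all the repetition in this task amounts to.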
2025-06-02 14:25:49.762581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7', 'dm-uuid-LVM-0DHQdMENg10onuP1gilf8HJ18ewp3PYPu7xdXLMFVyJjPsrnSMt5DptLsvyQSKuq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.762758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HeXiKj-Y2ur-EJzQ-DSWO-DbOw-90BR-diQB6B', 'scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf', 'scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.762797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klRX02-oPol-DcMk-qROk-Spg4-9fo7-Bn1a3b', 'scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e', 'scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.762810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8', 'scsi-SQEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.762821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10', 'dm-uuid-LVM-aXsMYsQIG8ipRI6F2Ecf6r6twXfyZeU7xIZbpf6RWajeJPlgDWFTHlsGQKjWz1LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.762848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d', 
'dm-uuid-LVM-kHfeidgHrXTbvPvXcWUbj91hl0Z4ABGq6i0Mp9siSSBfn9jcs9Wo6Ju11kKwZRP6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762920 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.762931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.762977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Dhy17-rLof-5atV-hb51-G5xb-ipkX-5N8jtU', 'scsi-0QEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7', 'scsi-SQEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d', 'dm-uuid-LVM-ArZCk8LA2tgmTNdcy1sxqx9AkNK4pZELH7EpPioFIlc0i0NnKMTWiIR6eimZUHba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lujp7B-oHJI-oyfJ-cKSB-z2fw-TJNM-IYwucw', 'scsi-0QEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b', 'scsi-SQEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064', 'dm-uuid-LVM-LYlgOOuwskw0FRxuwd5epNvmykOdYzYqPGwfzPfdt4v7TSbe2xrqDaw8ZlBsHExx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857', 'scsi-SQEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763110 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.763120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 14:25:49.763257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bCTR9v-UPvn-niQy-9m0V-qAJW-1Wfw-HfxNC2', 'scsi-0QEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb', 'scsi-SQEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPpjBi-Y6G1-qLzp-1TWE-7LY8-B2hS-h657E1', 'scsi-0QEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000', 'scsi-SQEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0', 'scsi-SQEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 14:25:49.763329 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.763340 | orchestrator | 2025-06-02 14:25:49.763350 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 14:25:49.763366 | orchestrator | Monday 02 June 2025 14:23:57 +0000 (0:00:00.583) 0:00:16.874 *********** 2025-06-02 14:25:49.763376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2', 
'dm-uuid-LVM-PRcTXFVMD2J9y2msp1jLbP8Tnzjv1PZVW7vY9gu7hRhzOlXXC6Y4BJjIOwreghe7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7', 'dm-uuid-LVM-0DHQdMENg10onuP1gilf8HJ18ewp3PYPu7xdXLMFVyJjPsrnSMt5DptLsvyQSKuq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763412 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_430962b1-bfac-488d-a447-b0298874a3fa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10', 'dm-uuid-LVM-aXsMYsQIG8ipRI6F2Ecf6r6twXfyZeU7xIZbpf6RWajeJPlgDWFTHlsGQKjWz1LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--999978ba--f5e8--5970--b49f--3220d15259a2-osd--block--999978ba--f5e8--5970--b49f--3220d15259a2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HeXiKj-Y2ur-EJzQ-DSWO-DbOw-90BR-diQB6B', 'scsi-0QEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf', 'scsi-SQEMU_QEMU_HARDDISK_fa9eac55-b7ba-400b-ad39-8d51d062dfbf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d', 'dm-uuid-LVM-kHfeidgHrXTbvPvXcWUbj91hl0Z4ABGq6i0Mp9siSSBfn9jcs9Wo6Ju11kKwZRP6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4eaa56f6--1bb5--52f9--9765--bc2816f621f7-osd--block--4eaa56f6--1bb5--52f9--9765--bc2816f621f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klRX02-oPol-DcMk-qROk-Spg4-9fo7-Bn1a3b', 'scsi-0QEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e', 'scsi-SQEMU_QEMU_HARDDISK_dc6882bf-da04-4edd-9882-73e1f985245e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8', 'scsi-SQEMU_QEMU_HARDDISK_efdd6e96-769c-48d5-86b4-ee9af75744a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763670 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.763680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763711 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16', 'scsi-SQEMU_QEMU_HARDDISK_76bfcd68-93f4-43fc-a7a6-b1d272437959-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d', 'dm-uuid-LVM-ArZCk8LA2tgmTNdcy1sxqx9AkNK4pZELH7EpPioFIlc0i0NnKMTWiIR6eimZUHba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10-osd--block--a3b854b8--87a4--5f9e--b4c6--d99e1c5dbb10'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6Dhy17-rLof-5atV-hb51-G5xb-ipkX-5N8jtU', 'scsi-0QEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7', 'scsi-SQEMU_QEMU_HARDDISK_d9b7d288-6907-4dde-a5ec-8795086443a7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064', 'dm-uuid-LVM-LYlgOOuwskw0FRxuwd5epNvmykOdYzYqPGwfzPfdt4v7TSbe2xrqDaw8ZlBsHExx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bbf0c471--2dcf--5556--af63--e058f1325c4d-osd--block--bbf0c471--2dcf--5556--af63--e058f1325c4d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lujp7B-oHJI-oyfJ-cKSB-z2fw-TJNM-IYwucw', 'scsi-0QEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b', 'scsi-SQEMU_QEMU_HARDDISK_3f8f7a8e-6ae0-4f67-bdef-3fe5e1007e1b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857', 'scsi-SQEMU_QEMU_HARDDISK_58632b91-4ff4-425f-9799-2cbdbd75f857'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763841 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763851 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.763861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16', 'scsi-SQEMU_QEMU_HARDDISK_b74c4224-3b45-4fa7-a33d-9e64f92a9cf7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d-osd--block--1475bed6--7ba6--5e8e--8ce2--217cc0c6359d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bCTR9v-UPvn-niQy-9m0V-qAJW-1Wfw-HfxNC2', 'scsi-0QEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb', 'scsi-SQEMU_QEMU_HARDDISK_f20c7008-f12c-46ab-b284-b84010eb63eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c542c38e--2fd0--548c--8c9f--0ca498087064-osd--block--c542c38e--2fd0--548c--8c9f--0ca498087064'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPpjBi-Y6G1-qLzp-1TWE-7LY8-B2hS-h657E1', 'scsi-0QEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000', 'scsi-SQEMU_QEMU_HARDDISK_456d640a-c6eb-4569-8c8e-a4a3fdd3e000'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.763988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0', 'scsi-SQEMU_QEMU_HARDDISK_23117054-a818-47a4-b6cc-218c8fcf9ce0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.764010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 14:25:49.764021 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764031 | orchestrator | 2025-06-02 14:25:49.764041 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 14:25:49.764051 | orchestrator | Monday 02 June 2025 14:23:57 +0000 (0:00:00.576) 0:00:17.451 *********** 2025-06-02 14:25:49.764060 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.764070 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.764080 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.764089 | orchestrator | 2025-06-02 14:25:49.764099 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-02 14:25:49.764108 | orchestrator | Monday 02 June 2025 14:23:58 +0000 (0:00:00.717) 0:00:18.168 *********** 2025-06-02 14:25:49.764118 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.764127 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.764251 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.764264 | orchestrator | 2025-06-02 14:25:49.764273 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 14:25:49.764283 | orchestrator | Monday 02 June 2025 14:23:59 +0000 (0:00:00.519) 0:00:18.688 *********** 2025-06-02 14:25:49.764293 | 
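
The long per-device dumps above are the ceph-facts role iterating over every gathered block-device fact on the storage nodes; each item reports false_condition: 'osd_auto_discovery | default(False) | bool', i.e. this testbed pins its OSD devices (sdb, sdc) explicitly, so auto-discovery never fires. As a rough, hedged sketch (task and variable names assumed, not necessarily ceph-ansible's exact code), an auto-discovery set_fact would build the device list from those same facts while filtering out disks that are partitioned or already held:

   - name: Generate devices list when osd_auto_discovery (sketch)
     ansible.builtin.set_fact:
       devices: "{{ devices | default([]) + ['/dev/' ~ item.key] }}"
     loop: "{{ ansible_facts['devices'] | dict2items }}"
     when:
       - osd_auto_discovery | default(False) | bool
       - item.value.partitions | length == 0   # would exclude sda, which carries the root filesystem
       - item.value.holders | length == 0      # would exclude sdb/sdc, already ceph LVM PVs (dm-0/dm-1)

The crush-rule reads that follow resume the normal fact gathering: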
orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.764302 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.764312 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.764321 | orchestrator | 2025-06-02 14:25:49.764331 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 14:25:49.764341 | orchestrator | Monday 02 June 2025 14:23:59 +0000 (0:00:00.656) 0:00:19.345 *********** 2025-06-02 14:25:49.764350 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764360 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764370 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764380 | orchestrator | 2025-06-02 14:25:49.764389 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 14:25:49.764399 | orchestrator | Monday 02 June 2025 14:24:00 +0000 (0:00:00.304) 0:00:19.650 *********** 2025-06-02 14:25:49.764408 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764418 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764428 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764437 | orchestrator | 2025-06-02 14:25:49.764447 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 14:25:49.764456 | orchestrator | Monday 02 June 2025 14:24:00 +0000 (0:00:00.428) 0:00:20.078 *********** 2025-06-02 14:25:49.764466 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764484 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764494 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764503 | orchestrator | 2025-06-02 14:25:49.764513 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 14:25:49.764522 | orchestrator | Monday 02 June 2025 14:24:01 +0000 (0:00:00.627) 0:00:20.705 *********** 2025-06-02 14:25:49.764532 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 14:25:49.764542 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 14:25:49.764551 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 14:25:49.764561 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 14:25:49.764570 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 14:25:49.764580 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 14:25:49.764590 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 14:25:49.764599 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 14:25:49.764609 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 14:25:49.764618 | orchestrator | 2025-06-02 14:25:49.764628 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 14:25:49.764637 | orchestrator | Monday 02 June 2025 14:24:02 +0000 (0:00:00.937) 0:00:21.643 *********** 2025-06-02 14:25:49.764647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 14:25:49.764657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 14:25:49.764667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 14:25:49.764676 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 14:25:49.764701 | orchestrator | 
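
The crush-rule tasks above resolve to the cluster default: the read succeeds, and both conditional "Set osd_pool_default_crush_rule fact" variants skip because no custom rule is configured. The monitor address facts are then built per monitor for IPv4 only; the ipv6 variant, whose per-monitor skips continue below, never matches. A minimal sketch of the IPv4 task (the 'ip_version' variable and the fact path are assumptions; ceph-ansible actually derives the address from the monitor_address / monitor_interface settings):

   - name: Set_fact _monitor_addresses - ipv4 (sketch)
     ansible.builtin.set_fact:
       _monitor_addresses: >-
         {{ _monitor_addresses | default([]) +
            [{'name': item,
              'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
     loop: "{{ groups[mon_group_name] }}"   # testbed-node-0..2 in this run
     when: ip_version == 'ipv4'

The remaining ipv6 skips follow: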
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 14:25:49.764710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 14:25:49.764718 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764726 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 14:25:49.764734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 14:25:49.764742 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 14:25:49.764750 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764758 | orchestrator | 2025-06-02 14:25:49.764766 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-02 14:25:49.764774 | orchestrator | Monday 02 June 2025 14:24:02 +0000 (0:00:00.340) 0:00:21.983 *********** 2025-06-02 14:25:49.764782 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:25:49.764790 | orchestrator | 2025-06-02 14:25:49.764798 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 14:25:49.764807 | orchestrator | Monday 02 June 2025 14:24:03 +0000 (0:00:00.731) 0:00:22.714 *********** 2025-06-02 14:25:49.764815 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764822 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764830 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764838 | orchestrator | 2025-06-02 14:25:49.764852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 14:25:49.764860 | orchestrator | Monday 02 June 2025 14:24:03 +0000 (0:00:00.340) 0:00:23.055 *********** 2025-06-02 14:25:49.764868 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764876 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764884 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764892 | orchestrator | 2025-06-02 14:25:49.764900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 14:25:49.764907 | orchestrator | Monday 02 June 2025 14:24:03 +0000 (0:00:00.300) 0:00:23.355 *********** 2025-06-02 14:25:49.764915 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.764928 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.764936 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:25:49.764944 | orchestrator | 2025-06-02 14:25:49.764952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 14:25:49.764960 | orchestrator | Monday 02 June 2025 14:24:04 +0000 (0:00:00.338) 0:00:23.694 *********** 2025-06-02 14:25:49.764968 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.764976 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.764983 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.764991 | orchestrator | 2025-06-02 14:25:49.764999 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 14:25:49.765007 | orchestrator | Monday 02 June 2025 14:24:04 +0000 (0:00:00.586) 0:00:24.281 *********** 2025-06-02 14:25:49.765015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:25:49.765023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 
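
set_radosgw_address.yml resolves the RGW bind address by precedence: a CIDR radosgw_address_block first, then an explicit radosgw_address, then the address of radosgw_interface. In this run only "Set_fact _radosgw_address to radosgw_address" returns ok, so an explicit address is configured; the _interface and radosgw_interface variants skipping around this point are the fallbacks. A hedged sketch of that precedence (the sentinel defaults 'subnet' / 'x.x.x.x' and the ansible.utils.ipaddr lookup are assumptions):

   - name: Set_fact _radosgw_address to radosgw_address_block ipv4 (sketch)
     ansible.builtin.set_fact:
       _radosgw_address: >-
         {{ ansible_facts['all_ipv4_addresses']
            | ansible.utils.ipaddr(radosgw_address_block) | first }}
     when: radosgw_address_block != 'subnet'    # no CIDR override given here

   - name: Set_fact _radosgw_address to radosgw_address (sketch)
     ansible.builtin.set_fact:
       _radosgw_address: "{{ radosgw_address }}"
     when: radosgw_address != 'x.x.x.x'         # the branch that ran in this log

The interface-based fallbacks continue below: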
14:25:49.765031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:25:49.765039 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.765047 | orchestrator | 2025-06-02 14:25:49.765055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 14:25:49.765063 | orchestrator | Monday 02 June 2025 14:24:05 +0000 (0:00:00.377) 0:00:24.658 *********** 2025-06-02 14:25:49.765071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:25:49.765079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:25:49.765087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:25:49.765095 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.765103 | orchestrator | 2025-06-02 14:25:49.765111 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 14:25:49.765118 | orchestrator | Monday 02 June 2025 14:24:05 +0000 (0:00:00.356) 0:00:25.015 *********** 2025-06-02 14:25:49.765126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 14:25:49.765149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 14:25:49.765157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 14:25:49.765165 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.765173 | orchestrator | 2025-06-02 14:25:49.765181 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 14:25:49.765189 | orchestrator | Monday 02 June 2025 14:24:05 +0000 (0:00:00.375) 0:00:25.390 *********** 2025-06-02 14:25:49.765197 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:25:49.765204 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:25:49.765212 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:25:49.765220 | orchestrator | 2025-06-02 14:25:49.765228 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 14:25:49.765236 | orchestrator | Monday 02 June 2025 14:24:06 +0000 (0:00:00.332) 0:00:25.723 *********** 2025-06-02 14:25:49.765244 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 14:25:49.765252 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 14:25:49.765260 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 14:25:49.765267 | orchestrator | 2025-06-02 14:25:49.765275 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-02 14:25:49.765283 | orchestrator | Monday 02 June 2025 14:24:06 +0000 (0:00:00.511) 0:00:26.234 *********** 2025-06-02 14:25:49.765291 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 14:25:49.765299 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:25:49.765307 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:25:49.765319 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 14:25:49.765327 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 14:25:49.765343 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 14:25:49.765351 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
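
After the rgw_instances reset workaround, each host gets a single RGW instance (item=0). The ceph_run_cmd / ceph_admin_command facts whose delegated entries surround this point are computed once and pushed to every node, which is what the "testbed-node-3 -> testbed-node-N(192.168.16.x)" arrows denote. For a containerized deployment the command wraps the ceph CLI in the container runtime; a hedged sketch (variable names as in ceph-ansible group_vars, exact flags and the loop target assumed):

   - name: Set_fact ceph_run_cmd (sketch, containerized case)
     ansible.builtin.set_fact:
       ceph_run_cmd: >-
         {{ container_binary }} run --rm --net=host
         -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z
         --entrypoint=ceph
         {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
     delegate_to: "{{ item }}"
     delegate_facts: true                 # store the fact on the delegated host
     loop: "{{ groups['all'] }}"          # the run delegates to every node incl. testbed-manager

The delegated entries resume: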
(item=testbed-manager) 2025-06-02 14:25:49.765358 | orchestrator | 2025-06-02 14:25:49.765366 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-02 14:25:49.765374 | orchestrator | Monday 02 June 2025 14:24:07 +0000 (0:00:00.980) 0:00:27.214 *********** 2025-06-02 14:25:49.765382 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 14:25:49.765390 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 14:25:49.765398 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 14:25:49.765406 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 14:25:49.765414 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 14:25:49.765421 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 14:25:49.765429 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 14:25:49.765437 | orchestrator | 2025-06-02 14:25:49.765449 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-02 14:25:49.765457 | orchestrator | Monday 02 June 2025 14:24:09 +0000 (0:00:01.912) 0:00:29.127 *********** 2025-06-02 14:25:49.765465 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:25:49.765473 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:25:49.765481 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-02 14:25:49.765489 | orchestrator | 2025-06-02 14:25:49.765497 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-02 14:25:49.765505 | orchestrator | Monday 02 June 2025 14:24:10 +0000 (0:00:00.401) 0:00:29.528 *********** 2025-06-02 14:25:49.765514 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 14:25:49.765523 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 14:25:49.765531 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 14:25:49.765539 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 14:25:49.765547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 14:25:49.765555 | orchestrator |
2025-06-02 14:25:49.765563 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-02 14:25:49.765572 | orchestrator | Monday 02 June 2025 14:24:55 +0000 (0:00:45.070) 0:01:14.599 ***********
2025-06-02 14:25:49.765579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765587 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765600 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765609 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765616 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765632 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-02 14:25:49.765640 | orchestrator |
2025-06-02 14:25:49.765648 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-02 14:25:49.765656 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:23.335) 0:01:37.935 ***********
2025-06-02 14:25:49.765664 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765672 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765691 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765699 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765707 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765715 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 14:25:49.765723 | orchestrator |
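The "generate keys" and "get keys from monitors" tasks above come from ceph-ansible's key-distribution flow: the OpenStack client keys are created once on the first monitor, then read back so they can be copied to the other nodes. A minimal sketch of the creation step with the plain ceph CLI follows; the key names and capability profiles are assumptions inferred from the pool and keyring names in this log, not the role's actual item list:

---
# Sketch only (assumed names/caps): create OpenStack client keys on the
# first monitor, mirroring what the "generate keys" task does above.
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: generate keys (sketch)
      ansible.builtin.command: >
        ceph auth get-or-create client.{{ item.name }}
        mon 'profile rbd' osd '{{ item.osd_caps }}'
      loop:
        - { name: glance, osd_caps: "profile rbd pool=images" }
        - { name: cinder, osd_caps: "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images" }
        - { name: cinder-backup, osd_caps: "profile rbd pool=backups" }
        - { name: nova, osd_caps: "profile rbd pool=vms" }
      changed_when: true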
2025-06-02 14:25:49.765731 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-02 14:25:49.765739 | orchestrator | Monday 02 June 2025 14:25:30 +0000 (0:00:11.688) 0:01:49.623 ***********
2025-06-02 14:25:49.765747 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765755 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765763 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765779 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765787 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765799 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765807 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765815 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765823 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765831 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765839 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765847 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765855 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765863 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 14:25:49.765879 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 14:25:49.765887 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 14:25:49.765894 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-02 14:25:49.765902 | orchestrator |
2025-06-02 14:25:49.765910 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:25:49.765923 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-02 14:25:49.765932 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 14:25:49.765940 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 14:25:49.765948 | orchestrator |
2025-06-02 14:25:49.765972 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:25:49.765980 | orchestrator | Monday 02 June 2025 14:25:47 +0000 (0:00:17.073) 0:02:06.697 ***********
2025-06-02 14:25:49.765988 | orchestrator | ===============================================================================
2025-06-02 14:25:49.765996 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.07s
2025-06-02 14:25:49.766004 | orchestrator | generate keys ---------------------------------------------------------- 23.34s
2025-06-02 14:25:49.766011 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.07s
2025-06-02 14:25:49.766064 | orchestrator | get keys from monitors ------------------------------------------------- 11.69s
2025-06-02 14:25:49.766073 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.06s
2025-06-02 14:25:49.766081 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.91s
2025-06-02 14:25:49.766089 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s
2025-06-02 14:25:49.766097 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s
2025-06-02 14:25:49.766104 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.94s
2025-06-02 14:25:49.766112 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s
2025-06-02 14:25:49.766120 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s
2025-06-02 14:25:49.766128 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s
2025-06-02 14:25:49.766151 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s
2025-06-02 14:25:49.766159 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s
2025-06-02 14:25:49.766171 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s
2025-06-02 14:25:49.766179 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s
2025-06-02 14:25:49.766187 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.63s
2025-06-02 14:25:49.766195 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s
2025-06-02 14:25:49.766203 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s
2025-06-02 14:25:49.766211 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s
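The recap above is dominated by "create openstack pool(s)" at 45.07s. For orientation, here is a sketch of what that loop amounts to in plain ceph commands; the pool names, pg_num/pgp_num of 32, size 3 and the rbd application tag mirror the loop items logged earlier, while everything else is an assumption rather than the verbatim task from /ansible/tasks/openstack_config.yml:

---
# Sketch only: pool names and settings are taken from the loop items
# visible in the log above; the real task lives in openstack_config.yml.
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: create openstack pool(s) (sketch)
      ansible.builtin.shell: |
        ceph osd pool create {{ item }} 32 32
        ceph osd pool set {{ item }} size 3
        ceph osd pool application enable {{ item }} rbd
      loop: [backups, volumes, images, metrics, vms]
      changed_when: true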
2025-06-02 14:25:49.766219 | orchestrator | 2025-06-02 14:25:49 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED
2025-06-02 14:25:49.766227 | orchestrator | 2025-06-02 14:25:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:25:52.804980 | orchestrator | 2025-06-02 14:25:52 | INFO  | Task 6ea4d6a1-38df-46ea-b97c-b99e342c1db7 is in state STARTED
2025-06-02 14:25:52.806351 | orchestrator | 2025-06-02 14:25:52 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED
2025-06-02 14:25:52.808182 | orchestrator | 2025-06-02 14:25:52 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED
2025-06-02 14:25:52.808264 | orchestrator | 2025-06-02 14:25:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:26:17.218394 | orchestrator | 2025-06-02 14:26:17 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED
2025-06-02 14:26:17.219325 | orchestrator | 2025-06-02 14:26:17 | INFO  | Task 6ea4d6a1-38df-46ea-b97c-b99e342c1db7 is in state SUCCESS
2025-06-02 14:26:17.221210 | orchestrator | 2025-06-02 14:26:17 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED
2025-06-02 14:26:17.222995 | orchestrator | 2025-06-02 14:26:17 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED
2025-06-02 14:26:17.223315 | orchestrator | 2025-06-02 14:26:17 | INFO  | Wait 1 second(s) until the next check
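The "Task … is in state STARTED" heartbeats are the osism client polling its task queue once per second until every task reports SUCCESS. The same wait-until-done pattern, expressed as an Ansible retry loop; the check-task-state helper and the task_id variable are hypothetical stand-ins for the real client call:

- name: Wait for a deployment task to finish (sketch)
  ansible.builtin.command: check-task-state {{ task_id }}  # hypothetical helper that prints the task state
  register: task_state
  until: "'SUCCESS' in task_state.stdout"
  retries: 600   # keep checking for up to ten minutes
  delay: 1       # matches "Wait 1 second(s) until the next check"
  changed_when: false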
2025-06-02 14:26:47.713906 | orchestrator | 2025-06-02 14:26:47 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED
2025-06-02 14:26:47.715285 | orchestrator | 2025-06-02 14:26:47 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state STARTED
2025-06-02 14:26:47.717003 | orchestrator | 2025-06-02 14:26:47 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED
2025-06-02 14:26:47.717029 | orchestrator | 2025-06-02 14:26:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:26:50.766825 | orchestrator | 2025-06-02 14:26:50 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED
2025-06-02 14:26:50.770666 | orchestrator | 2025-06-02 14:26:50 | INFO  | Task 5d1172c5-fe79-46d0-8088-c44728b69854 is in state SUCCESS
2025-06-02 14:26:50.772122 | orchestrator |
2025-06-02 14:26:50.772171 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-06-02 14:26:50.772183 | orchestrator |
2025-06-02 14:26:50.772195 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-06-02 14:26:50.772206 | orchestrator | Monday 02 June 2025 14:25:51 +0000 (0:00:00.112) 0:00:00.112 ***********
2025-06-02 14:26:50.772218 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-06-02 14:26:50.772230 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772241 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772252 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:26:50.772263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772274 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-06-02 14:26:50.772285 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-06-02 14:26:50.772296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-06-02 14:26:50.772307 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-06-02 14:26:50.772404 | orchestrator |
2025-06-02 14:26:50.772419 | orchestrator | TASK [Create share directory] **************************************************
2025-06-02 14:26:50.772430 | orchestrator | Monday 02 June 2025 14:25:55 +0000 (0:00:04.089) 0:00:04.201 ***********
2025-06-02 14:26:50.772441 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 14:26:50.772453 | orchestrator |
2025-06-02 14:26:50.772464 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-06-02 14:26:50.772474 | orchestrator | Monday 02 June 2025 14:25:56 +0000 (0:00:00.831) 0:00:05.032 ***********
2025-06-02 14:26:50.772486 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-06-02 14:26:50.772497 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772508 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772518 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:26:50.772529 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772540 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-02 14:26:50.772550 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-02 14:26:50.772561 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-02 14:26:50.772804 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-02 14:26:50.772820 | orchestrator |
2025-06-02 14:26:50.772831 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-02 14:26:50.772842 | orchestrator | Monday 02 June 2025 14:26:09 +0000 (0:00:12.867) 0:00:17.900 ***********
2025-06-02 14:26:50.772853 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-02 14:26:50.772878 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772890 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772901 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:26:50.772912 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 14:26:50.772923 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-02 14:26:50.772933 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-02 14:26:50.772944 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-02 14:26:50.772955 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-02 14:26:50.772966 | orchestrator |
2025-06-02 14:26:50.772977 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:26:50.772988 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 14:26:50.773000 | orchestrator |
2025-06-02 14:26:50.773021 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:26:50.773032 | orchestrator | Monday 02 June 2025 14:26:15 +0000 (0:00:06.726) 0:00:24.627 ***********
2025-06-02 14:26:50.773043 | orchestrator | ===============================================================================
2025-06-02 14:26:50.773054 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.87s
2025-06-02 14:26:50.773064 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.73s
2025-06-02 14:26:50.773075 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s
2025-06-02 14:26:50.773086 | orchestrator | Create share directory -------------------------------------------------- 0.83s
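In the recap above, "Write ceph keys to the share directory" dominates at 12.87s because the keyrings are fetched and written one by one. The fetch-and-write flow in outline; the /share destination is an assumption, and the real play also writes the same files into the OSISM configuration repository:

---
# Sketch only: fetch each client key from the first monitor, then store it
# as a keyring file. Paths are assumptions; the key list mirrors the log.
- hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Fetch all ceph keys (sketch)
      ansible.builtin.command: ceph auth get client.{{ item }}
      loop: [admin, cinder, cinder-backup, nova, glance, gnocchi, manila]
      delegate_to: testbed-node-0
      register: ceph_keys
      changed_when: false

    - name: Write ceph keys to the share directory (sketch)
      ansible.builtin.copy:
        content: "{{ item.stdout }}\n"
        dest: "/share/ceph.client.{{ item.item }}.keyring"  # assumed destination
        mode: "0600"
      loop: "{{ ceph_keys.results }}"
      loop_control:
        label: "{{ item.item }}"
      delegate_to: localhost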
orchestrator | 2025-06-02 14:26:50.773243 | orchestrator | 2025-06-02 14:26:50.773260 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:26:50.773282 | orchestrator | 2025-06-02 14:26:50.773305 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:26:50.773317 | orchestrator | Monday 02 June 2025 14:25:09 +0000 (0:00:00.250) 0:00:00.250 *********** 2025-06-02 14:26:50.773328 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.773365 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.773377 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.773388 | orchestrator | 2025-06-02 14:26:50.773399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:26:50.773409 | orchestrator | Monday 02 June 2025 14:25:09 +0000 (0:00:00.280) 0:00:00.531 *********** 2025-06-02 14:26:50.773420 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-02 14:26:50.773432 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-02 14:26:50.773442 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-02 14:26:50.773453 | orchestrator | 2025-06-02 14:26:50.773464 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-02 14:26:50.773475 | orchestrator | 2025-06-02 14:26:50.773485 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 14:26:50.773496 | orchestrator | Monday 02 June 2025 14:25:09 +0000 (0:00:00.438) 0:00:00.969 *********** 2025-06-02 14:26:50.773507 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:26:50.773519 | orchestrator | 2025-06-02 14:26:50.773530 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-02 14:26:50.773540 | orchestrator | Monday 02 June 2025 14:25:10 +0000 (0:00:00.495) 0:00:01.465 *********** 2025-06-02 14:26:50.773566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.773599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.773627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.773647 | orchestrator | 2025-06-02 14:26:50.773659 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-02 14:26:50.773670 | orchestrator | Monday 02 June 2025 14:25:11 +0000 (0:00:01.168) 0:00:02.633 *********** 2025-06-02 14:26:50.773681 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.773691 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.773703 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.773713 | orchestrator | 2025-06-02 14:26:50.773724 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 14:26:50.773735 | orchestrator | Monday 02 June 2025 14:25:11 +0000 (0:00:00.461) 0:00:03.095 *********** 2025-06-02 14:26:50.773746 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 14:26:50.773756 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 14:26:50.773774 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 14:26:50.773785 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 14:26:50.773797 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 14:26:50.773807 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 14:26:50.773818 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-02 14:26:50.773829 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 14:26:50.773840 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 14:26:50.773850 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  
2025-06-02 14:26:50.773861 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 14:26:50.773872 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 14:26:50.773883 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 14:26:50.773895 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 14:26:50.773908 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-02 14:26:50.773920 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 14:26:50.773932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 14:26:50.773945 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 14:26:50.773958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 14:26:50.773970 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 14:26:50.773983 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 14:26:50.773994 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 14:26:50.774006 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-02 14:26:50.774066 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 14:26:50.774083 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-02 14:26:50.774097 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-02 14:26:50.774108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-02 14:26:50.774127 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-02 14:26:50.774144 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-02 14:26:50.774155 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-02 14:26:50.774166 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-02 14:26:50.774177 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-02 14:26:50.774188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-02 14:26:50.774199 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-02 14:26:50.774210 | orchestrator | 2025-06-02 14:26:50.774221 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.774232 | orchestrator | Monday 02 June 2025 14:25:12 +0000 (0:00:00.760) 0:00:03.855 *********** 2025-06-02 14:26:50.774243 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.774254 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.774264 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.774275 | orchestrator | 2025-06-02 14:26:50.774286 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.774297 | orchestrator | Monday 02 June 2025 14:25:12 +0000 (0:00:00.297) 0:00:04.153 *********** 2025-06-02 14:26:50.774308 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774319 | orchestrator | 2025-06-02 14:26:50.774330 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.774378 | orchestrator | Monday 02 June 2025 14:25:13 +0000 (0:00:00.135) 0:00:04.288 *********** 2025-06-02 14:26:50.774397 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774409 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.774420 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.774431 | orchestrator | 2025-06-02 14:26:50.774441 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.774452 | orchestrator | Monday 02 June 2025 14:25:13 +0000 (0:00:00.474) 0:00:04.763 *********** 2025-06-02 14:26:50.774463 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.774474 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.774484 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.774495 | orchestrator | 2025-06-02 14:26:50.774506 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.774517 | orchestrator | Monday 02 June 2025 14:25:13 +0000 (0:00:00.311) 0:00:05.074 *********** 2025-06-02 14:26:50.774528 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774538 | orchestrator | 2025-06-02 14:26:50.774549 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.774560 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.136) 0:00:05.210 *********** 2025-06-02 14:26:50.774571 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774582 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.774592 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.774603 | orchestrator | 2025-06-02 14:26:50.774614 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.774625 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.287) 0:00:05.498 *********** 2025-06-02 14:26:50.774635 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.774655 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.774666 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.774677 | orchestrator | 2025-06-02 14:26:50.774688 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.774699 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.303) 0:00:05.802 *********** 2025-06-02 
14:26:50.774710 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774720 | orchestrator | 2025-06-02 14:26:50.774731 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.774742 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.375) 0:00:06.178 *********** 2025-06-02 14:26:50.774752 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774763 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.774774 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.774785 | orchestrator | 2025-06-02 14:26:50.774795 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.774806 | orchestrator | Monday 02 June 2025 14:25:15 +0000 (0:00:00.321) 0:00:06.499 *********** 2025-06-02 14:26:50.774816 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.774827 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.774838 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.774849 | orchestrator | 2025-06-02 14:26:50.774859 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.774873 | orchestrator | Monday 02 June 2025 14:25:15 +0000 (0:00:00.329) 0:00:06.829 *********** 2025-06-02 14:26:50.774891 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774909 | orchestrator | 2025-06-02 14:26:50.774928 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.774946 | orchestrator | Monday 02 June 2025 14:25:15 +0000 (0:00:00.133) 0:00:06.962 *********** 2025-06-02 14:26:50.774964 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.774976 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.774986 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.774997 | orchestrator | 2025-06-02 14:26:50.775008 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775019 | orchestrator | Monday 02 June 2025 14:25:16 +0000 (0:00:00.284) 0:00:07.247 *********** 2025-06-02 14:26:50.775037 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775057 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.775071 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.775082 | orchestrator | 2025-06-02 14:26:50.775098 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.775110 | orchestrator | Monday 02 June 2025 14:25:16 +0000 (0:00:00.490) 0:00:07.738 *********** 2025-06-02 14:26:50.775120 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775131 | orchestrator | 2025-06-02 14:26:50.775142 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.775153 | orchestrator | Monday 02 June 2025 14:25:16 +0000 (0:00:00.166) 0:00:07.904 *********** 2025-06-02 14:26:50.775164 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775174 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.775185 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.775195 | orchestrator | 2025-06-02 14:26:50.775207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775217 | orchestrator | Monday 02 June 2025 14:25:16 +0000 (0:00:00.293) 0:00:08.198 *********** 2025-06-02 14:26:50.775228 | 
orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775239 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.775249 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.775260 | orchestrator | 2025-06-02 14:26:50.775271 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.775282 | orchestrator | Monday 02 June 2025 14:25:17 +0000 (0:00:00.294) 0:00:08.492 *********** 2025-06-02 14:26:50.775293 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775311 | orchestrator | 2025-06-02 14:26:50.775322 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.775369 | orchestrator | Monday 02 June 2025 14:25:17 +0000 (0:00:00.151) 0:00:08.644 *********** 2025-06-02 14:26:50.775382 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775393 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.775404 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.775414 | orchestrator | 2025-06-02 14:26:50.775425 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775436 | orchestrator | Monday 02 June 2025 14:25:17 +0000 (0:00:00.521) 0:00:09.165 *********** 2025-06-02 14:26:50.775446 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775457 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.775467 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.775478 | orchestrator | 2025-06-02 14:26:50.775496 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.775508 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:00.374) 0:00:09.540 *********** 2025-06-02 14:26:50.775519 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775529 | orchestrator | 2025-06-02 14:26:50.775540 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.775551 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:00.210) 0:00:09.750 *********** 2025-06-02 14:26:50.775562 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775573 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.775584 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.775594 | orchestrator | 2025-06-02 14:26:50.775605 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775616 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:00.384) 0:00:10.135 *********** 2025-06-02 14:26:50.775627 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775637 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.775648 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.775659 | orchestrator | 2025-06-02 14:26:50.775670 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.775681 | orchestrator | Monday 02 June 2025 14:25:19 +0000 (0:00:00.332) 0:00:10.467 *********** 2025-06-02 14:26:50.775692 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775703 | orchestrator | 2025-06-02 14:26:50.775714 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.775725 | orchestrator | Monday 02 June 2025 14:25:19 +0000 (0:00:00.124) 0:00:10.592 *********** 2025-06-02 14:26:50.775735 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 14:26:50.775746 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.775757 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.775768 | orchestrator | 2025-06-02 14:26:50.775779 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775790 | orchestrator | Monday 02 June 2025 14:25:19 +0000 (0:00:00.556) 0:00:11.149 *********** 2025-06-02 14:26:50.775800 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775811 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.775822 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.775833 | orchestrator | 2025-06-02 14:26:50.775844 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.775855 | orchestrator | Monday 02 June 2025 14:25:20 +0000 (0:00:00.298) 0:00:11.448 *********** 2025-06-02 14:26:50.775866 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775876 | orchestrator | 2025-06-02 14:26:50.775887 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.775898 | orchestrator | Monday 02 June 2025 14:25:20 +0000 (0:00:00.150) 0:00:11.598 *********** 2025-06-02 14:26:50.775909 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.775919 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.775930 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.775941 | orchestrator | 2025-06-02 14:26:50.775952 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 14:26:50.775970 | orchestrator | Monday 02 June 2025 14:25:20 +0000 (0:00:00.320) 0:00:11.918 *********** 2025-06-02 14:26:50.775981 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:26:50.775991 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:26:50.776002 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:26:50.776013 | orchestrator | 2025-06-02 14:26:50.776024 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 14:26:50.776035 | orchestrator | Monday 02 June 2025 14:25:21 +0000 (0:00:00.537) 0:00:12.455 *********** 2025-06-02 14:26:50.776046 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776056 | orchestrator | 2025-06-02 14:26:50.776067 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 14:26:50.776078 | orchestrator | Monday 02 June 2025 14:25:21 +0000 (0:00:00.138) 0:00:12.594 *********** 2025-06-02 14:26:50.776089 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776105 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.776116 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.776127 | orchestrator | 2025-06-02 14:26:50.776137 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-02 14:26:50.776148 | orchestrator | Monday 02 June 2025 14:25:21 +0000 (0:00:00.295) 0:00:12.890 *********** 2025-06-02 14:26:50.776159 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:26:50.776170 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:26:50.776180 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:26:50.776191 | orchestrator | 2025-06-02 14:26:50.776202 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-02 14:26:50.776213 | orchestrator | Monday 02 June 2025 14:25:23 
+0000 (0:00:01.592) 0:00:14.483 *********** 2025-06-02 14:26:50.776223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 14:26:50.776234 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 14:26:50.776245 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 14:26:50.776256 | orchestrator | 2025-06-02 14:26:50.776267 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-02 14:26:50.776277 | orchestrator | Monday 02 June 2025 14:25:25 +0000 (0:00:02.187) 0:00:16.670 *********** 2025-06-02 14:26:50.776288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 14:26:50.776299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 14:26:50.776310 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 14:26:50.776321 | orchestrator | 2025-06-02 14:26:50.776351 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-02 14:26:50.776364 | orchestrator | Monday 02 June 2025 14:25:27 +0000 (0:00:01.870) 0:00:18.541 *********** 2025-06-02 14:26:50.776382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 14:26:50.776393 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 14:26:50.776404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 14:26:50.776415 | orchestrator | 2025-06-02 14:26:50.776426 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-02 14:26:50.776437 | orchestrator | Monday 02 June 2025 14:25:28 +0000 (0:00:01.547) 0:00:20.088 *********** 2025-06-02 14:26:50.776447 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776458 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.776469 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.776480 | orchestrator | 2025-06-02 14:26:50.776491 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-02 14:26:50.776515 | orchestrator | Monday 02 June 2025 14:25:29 +0000 (0:00:00.302) 0:00:20.391 *********** 2025-06-02 14:26:50.776526 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776536 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.776547 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.776558 | orchestrator | 2025-06-02 14:26:50.776569 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 14:26:50.776579 | orchestrator | Monday 02 June 2025 14:25:29 +0000 (0:00:00.324) 0:00:20.715 *********** 2025-06-02 14:26:50.776590 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:26:50.776601 | orchestrator | 2025-06-02 14:26:50.776612 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 14:26:50.776623 | orchestrator | Monday 02 June 2025 14:25:30 +0000 (0:00:00.915) 0:00:21.631 
*********** 2025-06-02 14:26:50.776642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.776665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.776693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.776706 | orchestrator | 2025-06-02 14:26:50.776717 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 14:26:50.776728 | orchestrator | Monday 02 June 2025 14:25:31 +0000 (0:00:01.513) 0:00:23.145 *********** 2025-06-02 14:26:50.776750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776769 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.776825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776837 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.776848 | orchestrator | 2025-06-02 14:26:50.776859 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 14:26:50.776870 | orchestrator | Monday 02 June 2025 14:25:32 +0000 (0:00:00.654) 0:00:23.799 *********** 2025-06-02 14:26:50.776895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776914 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.776931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.776963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 14:26:50.776982 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.776993 | orchestrator | 2025-06-02 14:26:50.777004 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-02 14:26:50.777015 | orchestrator | Monday 02 June 2025 14:25:33 +0000 (0:00:01.065) 0:00:24.865 *********** 2025-06-02 14:26:50.777033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.777054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.777080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 14:26:50.777093 | orchestrator | 2025-06-02 14:26:50.777104 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 14:26:50.777115 | orchestrator | Monday 02 June 2025 14:25:34 +0000 (0:00:01.315) 0:00:26.181 *********** 2025-06-02 14:26:50.777126 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:26:50.777143 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:26:50.777154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:26:50.777165 | orchestrator | 2025-06-02 14:26:50.777176 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 14:26:50.777187 | orchestrator | Monday 02 June 2025 14:25:35 +0000 (0:00:00.301) 0:00:26.482 *********** 2025-06-02 
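[Editor's note] The item dicts dumped above all share one per-service definition that the horizon and service-cert-copy roles loop over. Stripped to its load-bearing fields, the shape is as below — a trimmed sketch, with values copied from the testbed-node-0 item and an illustrative variable name:

```python
# Trimmed sketch of the horizon service definition dumped in the log above.
# The variable name is illustrative; the field values are taken from the log.
horizon_service = {
    "container_name": "horizon",
    "image": "registry.osism.tech/kolla/horizon:2024.2",
    # ENABLE_* flags select which dashboard plugins the container activates.
    "environment": {
        "ENABLE_DESIGNATE": "yes", "ENABLE_MAGNUM": "yes",
        "ENABLE_MANILA": "yes", "ENABLE_OCTAVIA": "yes",
    },
    # Docker healthcheck: curl the node-local listener, 30s interval, 3 retries.
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
    # HAProxy listens on 443 externally and forwards to the plain-HTTP
    # backend on 80; ACME challenges are diverted to a dedicated backend.
    "haproxy": {
        "horizon_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "443", "listen_port": "80", "tls_backend": "no",
            "frontend_http_extra": [
                "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
            ],
        },
    },
}
```

The ENABLE_* toggles drive the dashboard plugins inside the container, while the haproxy subtree is what yields the 443-to-80 frontends and the ACME-challenge diversion visible in the items above.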
14:26:50.777198 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:26:50.777209 | orchestrator | 2025-06-02 14:26:50.777220 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-02 14:26:50.777231 | orchestrator | Monday 02 June 2025 14:25:35 +0000 (0:00:00.698) 0:00:27.181 *********** 2025-06-02 14:26:50.777242 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:26:50.777253 | orchestrator | 2025-06-02 14:26:50.777269 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-02 14:26:50.777280 | orchestrator | Monday 02 June 2025 14:25:38 +0000 (0:00:02.122) 0:00:29.303 *********** 2025-06-02 14:26:50.777291 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:26:50.777302 | orchestrator | 2025-06-02 14:26:50.777313 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-02 14:26:50.777324 | orchestrator | Monday 02 June 2025 14:25:40 +0000 (0:00:02.008) 0:00:31.312 *********** 2025-06-02 14:26:50.777362 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:26:50.777374 | orchestrator | 2025-06-02 14:26:50.777385 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 14:26:50.777396 | orchestrator | Monday 02 June 2025 14:25:54 +0000 (0:00:14.567) 0:00:45.879 *********** 2025-06-02 14:26:50.777407 | orchestrator | 2025-06-02 14:26:50.777418 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 14:26:50.777429 | orchestrator | Monday 02 June 2025 14:25:54 +0000 (0:00:00.061) 0:00:45.941 *********** 2025-06-02 14:26:50.777439 | orchestrator | 2025-06-02 14:26:50.777450 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 14:26:50.777461 | orchestrator | Monday 02 June 2025 14:25:54 +0000 (0:00:00.062) 0:00:46.003 *********** 2025-06-02 14:26:50.777471 | orchestrator | 2025-06-02 14:26:50.777482 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-02 14:26:50.777493 | orchestrator | Monday 02 June 2025 14:25:54 +0000 (0:00:00.070) 0:00:46.073 *********** 2025-06-02 14:26:50.777504 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:26:50.777514 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:26:50.777525 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:26:50.777536 | orchestrator | 2025-06-02 14:26:50.777546 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:26:50.777558 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-02 14:26:50.777569 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 14:26:50.777579 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 14:26:50.777590 | orchestrator | 2025-06-02 14:26:50.777601 | orchestrator | 2025-06-02 14:26:50.777611 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:26:50.777622 | orchestrator | Monday 02 June 2025 14:26:49 +0000 (0:00:54.460) 0:01:40.533 *********** 2025-06-02 14:26:50.777632 | orchestrator | 
=============================================================================== 2025-06-02 14:26:50.777643 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.46s 2025-06-02 14:26:50.777654 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.57s 2025-06-02 14:26:50.777664 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.19s 2025-06-02 14:26:50.777685 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.12s 2025-06-02 14:26:50.777696 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.01s 2025-06-02 14:26:50.777707 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.87s 2025-06-02 14:26:50.777718 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.59s 2025-06-02 14:26:50.777728 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.55s 2025-06-02 14:26:50.777744 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.51s 2025-06-02 14:26:50.777755 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.32s 2025-06-02 14:26:50.777765 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.17s 2025-06-02 14:26:50.777776 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s 2025-06-02 14:26:50.777787 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.92s 2025-06-02 14:26:50.777797 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-06-02 14:26:50.777808 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-06-02 14:26:50.777818 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-06-02 14:26:50.777829 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-06-02 14:26:50.777840 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-06-02 14:26:50.777850 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-06-02 14:26:50.777861 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2025-06-02 14:26:50.777871 | orchestrator | 2025-06-02 14:26:50 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:26:50.777883 | orchestrator | 2025-06-02 14:26:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:26:53.815763 | orchestrator | 2025-06-02 14:26:53 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:26:53.816569 | orchestrator | 2025-06-02 14:26:53 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:26:53.816611 | orchestrator | 2025-06-02 14:26:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:26:56.864582 | orchestrator | 2025-06-02 14:26:56 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:26:56.867167 | orchestrator | 2025-06-02 14:26:56 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:26:56.867218 | orchestrator | 2025-06-02 14:26:56 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 14:26:59.917559 | orchestrator | 2025-06-02 14:26:59 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:26:59.919729 | orchestrator | 2025-06-02 14:26:59 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:26:59.919773 | orchestrator | 2025-06-02 14:26:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:02.977696 | orchestrator | 2025-06-02 14:27:02 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:27:02.978916 | orchestrator | 2025-06-02 14:27:02 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:02.978949 | orchestrator | 2025-06-02 14:27:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:06.037602 | orchestrator | 2025-06-02 14:27:06 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:27:06.038208 | orchestrator | 2025-06-02 14:27:06 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:06.038271 | orchestrator | 2025-06-02 14:27:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:09.076654 | orchestrator | 2025-06-02 14:27:09 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:27:09.077550 | orchestrator | 2025-06-02 14:27:09 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:09.077583 | orchestrator | 2025-06-02 14:27:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:12.115676 | orchestrator | 2025-06-02 14:27:12 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state STARTED 2025-06-02 14:27:12.117083 | orchestrator | 2025-06-02 14:27:12 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:12.117113 | orchestrator | 2025-06-02 14:27:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:15.213521 | orchestrator | 2025-06-02 14:27:15 | INFO  | Task fcf95634-b09e-402f-9c1c-d567753e5190 is in state SUCCESS 2025-06-02 14:27:15.214765 | orchestrator | 2025-06-02 14:27:15 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:15.214920 | orchestrator | 2025-06-02 14:27:15 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:15.216237 | orchestrator | 2025-06-02 14:27:15 | INFO  | Task 26bbdc6d-2b2f-4f43-aa2c-327bcab71f72 is in state STARTED 2025-06-02 14:27:15.217099 | orchestrator | 2025-06-02 14:27:15 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:15.217141 | orchestrator | 2025-06-02 14:27:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:18.249623 | orchestrator | 2025-06-02 14:27:18 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:18.249708 | orchestrator | 2025-06-02 14:27:18 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:18.249723 | orchestrator | 2025-06-02 14:27:18 | INFO  | Task 26bbdc6d-2b2f-4f43-aa2c-327bcab71f72 is in state SUCCESS 2025-06-02 14:27:18.249734 | orchestrator | 2025-06-02 14:27:18 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:18.249745 | orchestrator | 2025-06-02 14:27:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:21.290744 | orchestrator | 2025-06-02 14:27:21 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 
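[Editor's note] The INFO records surrounding this point show the osism CLI polling the manager roughly once per second until each queued task leaves STARTED (ending in SUCCESS, as seen for fcf95634 and 26bbdc6d). A minimal sketch of that wait loop; `get_task_state()` is a hypothetical stand-in for the manager's real task-status API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every queued task leaves the STARTED state.

    Minimal sketch of the wait loop visible in the surrounding log;
    get_task_state() is a hypothetical stand-in for the manager's
    task-status API.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```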
14:27:21.291396 | orchestrator | 2025-06-02 14:27:21 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:21.292418 | orchestrator | 2025-06-02 14:27:21 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:21.294231 | orchestrator | 2025-06-02 14:27:21 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:21.296379 | orchestrator | 2025-06-02 14:27:21 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:21.296486 | orchestrator | 2025-06-02 14:27:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:24.329594 | orchestrator | 2025-06-02 14:27:24 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:24.331300 | orchestrator | 2025-06-02 14:27:24 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:24.332500 | orchestrator | 2025-06-02 14:27:24 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:24.335304 | orchestrator | 2025-06-02 14:27:24 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:24.335944 | orchestrator | 2025-06-02 14:27:24 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:24.338742 | orchestrator | 2025-06-02 14:27:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:27.390703 | orchestrator | 2025-06-02 14:27:27 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:27.391918 | orchestrator | 2025-06-02 14:27:27 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:27.393281 | orchestrator | 2025-06-02 14:27:27 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:27.395219 | orchestrator | 2025-06-02 14:27:27 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:27.396739 | orchestrator | 2025-06-02 14:27:27 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:27.396776 | orchestrator | 2025-06-02 14:27:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:30.439144 | orchestrator | 2025-06-02 14:27:30 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:30.440842 | orchestrator | 2025-06-02 14:27:30 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:30.443340 | orchestrator | 2025-06-02 14:27:30 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:30.443381 | orchestrator | 2025-06-02 14:27:30 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:30.443393 | orchestrator | 2025-06-02 14:27:30 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:30.443405 | orchestrator | 2025-06-02 14:27:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:33.492523 | orchestrator | 2025-06-02 14:27:33 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:33.494133 | orchestrator | 2025-06-02 14:27:33 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:33.495892 | orchestrator | 2025-06-02 14:27:33 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:33.498282 | orchestrator | 2025-06-02 14:27:33 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in 
state STARTED 2025-06-02 14:27:33.500368 | orchestrator | 2025-06-02 14:27:33 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:33.500680 | orchestrator | 2025-06-02 14:27:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:36.541576 | orchestrator | 2025-06-02 14:27:36 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:36.543642 | orchestrator | 2025-06-02 14:27:36 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:36.545654 | orchestrator | 2025-06-02 14:27:36 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:36.547212 | orchestrator | 2025-06-02 14:27:36 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:36.549098 | orchestrator | 2025-06-02 14:27:36 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:36.549134 | orchestrator | 2025-06-02 14:27:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:39.592206 | orchestrator | 2025-06-02 14:27:39 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:39.594604 | orchestrator | 2025-06-02 14:27:39 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:39.598063 | orchestrator | 2025-06-02 14:27:39 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:39.601612 | orchestrator | 2025-06-02 14:27:39 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:39.605098 | orchestrator | 2025-06-02 14:27:39 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:39.605176 | orchestrator | 2025-06-02 14:27:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:42.667389 | orchestrator | 2025-06-02 14:27:42 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:42.668097 | orchestrator | 2025-06-02 14:27:42 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:42.669959 | orchestrator | 2025-06-02 14:27:42 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:42.669987 | orchestrator | 2025-06-02 14:27:42 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:42.670978 | orchestrator | 2025-06-02 14:27:42 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:42.671000 | orchestrator | 2025-06-02 14:27:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:45.703537 | orchestrator | 2025-06-02 14:27:45 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:45.703632 | orchestrator | 2025-06-02 14:27:45 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:45.705330 | orchestrator | 2025-06-02 14:27:45 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:45.707588 | orchestrator | 2025-06-02 14:27:45 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:45.707614 | orchestrator | 2025-06-02 14:27:45 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state STARTED 2025-06-02 14:27:45.707625 | orchestrator | 2025-06-02 14:27:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:48.736332 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in 
state STARTED 2025-06-02 14:27:48.736428 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:48.736958 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:48.737467 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:27:48.738427 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:48.740211 | orchestrator | 2025-06-02 14:27:48 | INFO  | Task 16ccd430-1a87-482e-8ff8-7f871e99a42d is in state SUCCESS 2025-06-02 14:27:48.740433 | orchestrator | 2025-06-02 14:27:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:27:48.741293 | orchestrator | 2025-06-02 14:27:48.741325 | orchestrator | 2025-06-02 14:27:48.741337 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-02 14:27:48.741348 | orchestrator | 2025-06-02 14:27:48.741981 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-02 14:27:48.742008 | orchestrator | Monday 02 June 2025 14:26:20 +0000 (0:00:00.255) 0:00:00.255 *********** 2025-06-02 14:27:48.742067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-02 14:27:48.742080 | orchestrator | 2025-06-02 14:27:48.742105 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-02 14:27:48.742117 | orchestrator | Monday 02 June 2025 14:26:20 +0000 (0:00:00.210) 0:00:00.466 *********** 2025-06-02 14:27:48.742149 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-02 14:27:48.742160 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-02 14:27:48.742171 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-02 14:27:48.742182 | orchestrator | 2025-06-02 14:27:48.742193 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-02 14:27:48.742204 | orchestrator | Monday 02 June 2025 14:26:21 +0000 (0:00:01.137) 0:00:01.603 *********** 2025-06-02 14:27:48.742215 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-02 14:27:48.742225 | orchestrator | 2025-06-02 14:27:48.742236 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-02 14:27:48.742247 | orchestrator | Monday 02 June 2025 14:26:22 +0000 (0:00:01.122) 0:00:02.726 *********** 2025-06-02 14:27:48.742258 | orchestrator | changed: [testbed-manager] 2025-06-02 14:27:48.742268 | orchestrator | 2025-06-02 14:27:48.742279 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-02 14:27:48.742290 | orchestrator | Monday 02 June 2025 14:26:23 +0000 (0:00:01.147) 0:00:03.874 *********** 2025-06-02 14:27:48.742300 | orchestrator | changed: [testbed-manager] 2025-06-02 14:27:48.742311 | orchestrator | 2025-06-02 14:27:48.742322 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-02 14:27:48.742332 | orchestrator | Monday 02 June 2025 14:26:24 +0000 (0:00:00.944) 0:00:04.818 *********** 2025-06-02 14:27:48.742343 | 
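[Editor's note] The record that follows shows Ansible's retries/until handling: "Manage cephclient service" fails once while the compose service is still coming up, is retried (10 attempts budgeted), and then reports ok after 37.5s total. A rough Python equivalent of such a health-gated retry loop; the docker-inspect probe and the timings are illustrative, not taken from the role:

```python
import subprocess
import time

def wait_until_healthy(container, retries=10, delay=5):
    """Rough equivalent of an Ansible `until`/`retries: 10` loop like the
    'Manage cephclient service' task below; the probe command and timings
    are illustrative, not taken from the role."""
    for attempt in range(retries):
        result = subprocess.run(
            ["docker", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        )
        if result.stdout.strip() == "healthy":
            return True
        print(f"FAILED - RETRYING: ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False
```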
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-02 14:27:48.742353 | orchestrator | ok: [testbed-manager] 2025-06-02 14:27:48.742364 | orchestrator | 2025-06-02 14:27:48.742375 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-02 14:27:48.742386 | orchestrator | Monday 02 June 2025 14:27:02 +0000 (0:00:37.537) 0:00:42.355 *********** 2025-06-02 14:27:48.742397 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-02 14:27:48.742408 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-02 14:27:48.742419 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-02 14:27:48.742430 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-02 14:27:48.742440 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-02 14:27:48.742451 | orchestrator | 2025-06-02 14:27:48.742461 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-02 14:27:48.742472 | orchestrator | Monday 02 June 2025 14:27:06 +0000 (0:00:04.028) 0:00:46.384 *********** 2025-06-02 14:27:48.742482 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-02 14:27:48.742493 | orchestrator | 2025-06-02 14:27:48.742539 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-02 14:27:48.742557 | orchestrator | Monday 02 June 2025 14:27:06 +0000 (0:00:00.440) 0:00:46.824 *********** 2025-06-02 14:27:48.742568 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:27:48.742578 | orchestrator | 2025-06-02 14:27:48.742589 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-02 14:27:48.742602 | orchestrator | Monday 02 June 2025 14:27:07 +0000 (0:00:00.132) 0:00:46.957 *********** 2025-06-02 14:27:48.742615 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:27:48.742627 | orchestrator | 2025-06-02 14:27:48.742639 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-02 14:27:48.742651 | orchestrator | Monday 02 June 2025 14:27:07 +0000 (0:00:00.270) 0:00:47.227 *********** 2025-06-02 14:27:48.742664 | orchestrator | changed: [testbed-manager] 2025-06-02 14:27:48.742676 | orchestrator | 2025-06-02 14:27:48.742688 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-02 14:27:48.742701 | orchestrator | Monday 02 June 2025 14:27:08 +0000 (0:00:01.408) 0:00:48.636 *********** 2025-06-02 14:27:48.742713 | orchestrator | changed: [testbed-manager] 2025-06-02 14:27:48.742726 | orchestrator | 2025-06-02 14:27:48.742745 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-02 14:27:48.742757 | orchestrator | Monday 02 June 2025 14:27:09 +0000 (0:00:00.951) 0:00:49.587 *********** 2025-06-02 14:27:48.742770 | orchestrator | changed: [testbed-manager] 2025-06-02 14:27:48.742782 | orchestrator | 2025-06-02 14:27:48.742795 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-02 14:27:48.742808 | orchestrator | Monday 02 June 2025 14:27:10 +0000 (0:00:00.621) 0:00:50.208 *********** 2025-06-02 14:27:48.742826 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-02 14:27:48.742842 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-02 14:27:48.742854 | orchestrator | 
ok: [testbed-manager] => (item=radosgw-admin) 2025-06-02 14:27:48.742865 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-02 14:27:48.742875 | orchestrator | 2025-06-02 14:27:48.742886 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:27:48.742897 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 14:27:48.742909 | orchestrator | 2025-06-02 14:27:48.742920 | orchestrator | 2025-06-02 14:27:48.742974 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:27:48.742987 | orchestrator | Monday 02 June 2025 14:27:11 +0000 (0:00:01.492) 0:00:51.700 *********** 2025-06-02 14:27:48.742998 | orchestrator | =============================================================================== 2025-06-02 14:27:48.743009 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.54s 2025-06-02 14:27:48.743020 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.03s 2025-06-02 14:27:48.743031 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2025-06-02 14:27:48.743049 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.41s 2025-06-02 14:27:48.743060 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.15s 2025-06-02 14:27:48.743071 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s 2025-06-02 14:27:48.743082 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s 2025-06-02 14:27:48.743093 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.95s 2025-06-02 14:27:48.743104 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s 2025-06-02 14:27:48.743115 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2025-06-02 14:27:48.743126 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-06-02 14:27:48.743137 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s 2025-06-02 14:27:48.743148 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-06-02 14:27:48.743159 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-06-02 14:27:48.743170 | orchestrator | 2025-06-02 14:27:48.743181 | orchestrator | 2025-06-02 14:27:48.743192 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:27:48.743203 | orchestrator | 2025-06-02 14:27:48.743214 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:27:48.743226 | orchestrator | Monday 02 June 2025 14:27:16 +0000 (0:00:00.159) 0:00:00.160 *********** 2025-06-02 14:27:48.743237 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.743248 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.743259 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.743270 | orchestrator | 2025-06-02 14:27:48.743281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:27:48.743292 | orchestrator | Monday 02 June 2025 14:27:16 
+0000 (0:00:00.277) 0:00:00.438 *********** 2025-06-02 14:27:48.743303 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 14:27:48.743314 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 14:27:48.743332 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 14:27:48.743344 | orchestrator | 2025-06-02 14:27:48.743355 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-02 14:27:48.743366 | orchestrator | 2025-06-02 14:27:48.743377 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-02 14:27:48.743388 | orchestrator | Monday 02 June 2025 14:27:16 +0000 (0:00:00.548) 0:00:00.986 *********** 2025-06-02 14:27:48.743399 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.743410 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.743421 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.743432 | orchestrator | 2025-06-02 14:27:48.743443 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:27:48.743455 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:27:48.743466 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:27:48.743477 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:27:48.743488 | orchestrator | 2025-06-02 14:27:48.743499 | orchestrator | 2025-06-02 14:27:48.743527 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:27:48.743538 | orchestrator | Monday 02 June 2025 14:27:17 +0000 (0:00:00.718) 0:00:01.705 *********** 2025-06-02 14:27:48.743549 | orchestrator | =============================================================================== 2025-06-02 14:27:48.743560 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-06-02 14:27:48.743571 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-06-02 14:27:48.743582 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-06-02 14:27:48.743592 | orchestrator | 2025-06-02 14:27:48.743603 | orchestrator | 2025-06-02 14:27:48.743614 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:27:48.743625 | orchestrator | 2025-06-02 14:27:48.743636 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:27:48.743647 | orchestrator | Monday 02 June 2025 14:25:08 +0000 (0:00:00.247) 0:00:00.248 *********** 2025-06-02 14:27:48.743658 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.743669 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.743679 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.743690 | orchestrator | 2025-06-02 14:27:48.743701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:27:48.743712 | orchestrator | Monday 02 June 2025 14:25:09 +0000 (0:00:00.323) 0:00:00.571 *********** 2025-06-02 14:27:48.743723 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 14:27:48.743734 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 
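[Editor's note] The "Waiting for Keystone public port to be UP" task in the play above is a plain TCP-connect gate: retry a connection until it succeeds or a deadline passes. A sketch of the same check; the host, port, and timeout defaults here are illustrative (port 5000 matches the keystone listener shown in the service definitions below):

```python
import socket
import time

def wait_for_port(host="api.testbed.osism.xyz", port=5000, timeout=300):
    """Sketch of the 'Waiting for Keystone public port to be UP' gate:
    retry a TCP connect until it succeeds or the deadline passes
    (host, port, and timeout are illustrative defaults)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(2)
    return False
```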
2025-06-02 14:27:48.743767 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-02 14:27:48.743778 | orchestrator |
2025-06-02 14:27:48.743817 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 14:27:48.743830 | orchestrator | Monday 02 June 2025 14:25:09 +0000 (0:00:00.429) 0:00:01.001 ***********
2025-06-02 14:27:48.743841 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:27:48.743852 | orchestrator |
2025-06-02 14:27:48.743863 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-02 14:27:48.743874 | orchestrator | Monday 02 June 2025 14:25:10 +0000 (0:00:00.545) 0:00:01.546 ***********
2025-06-02 14:27:48.743895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 14:27:48.743922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 14:27:48.743935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.743948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.743992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744059 | orchestrator | 2025-06-02 14:27:48.744070 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-02 14:27:48.744081 | orchestrator | Monday 02 June 2025 14:25:12 +0000 (0:00:01.807) 0:00:03.353 *********** 2025-06-02 14:27:48.744093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-02 14:27:48.744104 | orchestrator | 2025-06-02 14:27:48.744115 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-02 14:27:48.744126 | orchestrator | Monday 02 June 2025 14:25:12 +0000 (0:00:00.821) 0:00:04.174 *********** 2025-06-02 14:27:48.744136 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.744148 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.744158 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.744169 | orchestrator | 2025-06-02 14:27:48.744180 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-02 14:27:48.744191 | orchestrator | Monday 02 June 2025 14:25:13 +0000 (0:00:00.491) 0:00:04.666 *********** 2025-06-02 14:27:48.744202 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 14:27:48.744213 | orchestrator | 2025-06-02 14:27:48.744224 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 14:27:48.744235 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.690) 0:00:05.356 *********** 2025-06-02 14:27:48.744251 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:27:48.744263 | orchestrator | 2025-06-02 14:27:48.744279 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-02 14:27:48.744290 | orchestrator | Monday 02 June 2025 14:25:14 +0000 (0:00:00.551) 0:00:05.907 *********** 2025-06-02 14:27:48.744307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.744320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.744334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.744357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.744486 | orchestrator | 2025-06-02 14:27:48.744497 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 14:27:48.744557 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:03.482) 0:00:09.390 *********** 2025-06-02 14:27:48.744571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:27:48.744635 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.744647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:27:48.744688 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.744708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:27:48.744748 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.744760 | orchestrator | 2025-06-02 14:27:48.744771 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 14:27:48.744782 | orchestrator | Monday 02 June 2025 14:25:18 +0000 (0:00:00.654) 0:00:10.045 *********** 2025-06-02 14:27:48.744794 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:27:48.744835 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.744859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 14:27:48.744895 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.744907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 14:27:48.744924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.744942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2025-06-02 14:27:48.744954 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.744965 | orchestrator | 2025-06-02 14:27:48.744976 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 14:27:48.744991 | orchestrator | Monday 02 June 2025 14:25:19 +0000 (0:00:00.873) 0:00:10.918 *********** 2025-06-02 14:27:48.745004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 
14:27:48.745054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745134 | orchestrator | 2025-06-02 14:27:48.745145 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 14:27:48.745157 | orchestrator | Monday 02 June 2025 14:25:23 +0000 (0:00:03.610) 0:00:14.529 *********** 2025-06-02 14:27:48.745175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.745205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.745246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.745268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.745303 | orchestrator | 2025-06-02 14:27:48.745314 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 14:27:48.745332 | orchestrator | Monday 02 June 2025 14:25:28 +0000 (0:00:04.980) 0:00:19.510 *********** 2025-06-02 14:27:48.745343 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:27:48.745354 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.745365 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:27:48.745376 | orchestrator | 2025-06-02 14:27:48.745387 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-02 14:27:48.745398 | orchestrator | Monday 02 June 2025 14:25:29 +0000 (0:00:01.382) 0:00:20.892 *********** 2025-06-02 14:27:48.745409 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.745420 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.745431 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.745442 | orchestrator | 2025-06-02 14:27:48.745452 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-02 14:27:48.745463 | orchestrator | Monday 02 June 2025 14:25:30 +0000 (0:00:00.548) 0:00:21.442 *********** 2025-06-02 14:27:48.745474 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.745485 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.745496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.745523 | orchestrator | 2025-06-02 14:27:48.745535 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-02 14:27:48.745547 | orchestrator | Monday 02 June 2025 14:25:30 +0000 (0:00:00.527) 0:00:21.969 *********** 2025-06-02 14:27:48.745558 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.745568 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.745579 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.745590 | orchestrator | 2025-06-02 14:27:48.745601 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-02 14:27:48.745612 | orchestrator | Monday 02 June 2025 14:25:31 +0000 (0:00:00.387) 0:00:22.357 *********** 2025-06-02 14:27:48.745624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.745660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.745683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 14:27:48.745695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 14:27:48.745707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 14:27:48.745725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 14:27:48.745741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 14:27:48.745760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 14:27:48.745771 | orchestrator |
2025-06-02 14:27:48.745782 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 14:27:48.745794 | orchestrator | Monday 02 June 2025 14:25:33 +0000 (0:00:02.234) 0:00:24.591 ***********
2025-06-02 14:27:48.745805 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:27:48.745816 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:27:48.745827 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:27:48.745837 | orchestrator |
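The "Copying over existing policy file" task above distributes the operator-supplied overlay found earlier at /opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml into each keystone container's config directory; the keystone-ssh items are skipped, since the sidecar containers do not consume the policy file. Keystone policy files are plain YAML maps from policy target to rule; a hypothetical entry for illustration, not the testbed's actual content:

    # Hypothetical override; target and rule are examples only.
    "identity:list_users": "role:admin or role:reader"
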
2025-06-02 14:27:48.745848 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-06-02 14:27:48.745859 | orchestrator | Monday 02 June 2025 14:25:33 +0000 (0:00:00.305) 0:00:24.897 ***********
2025-06-02 14:27:48.745870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 14:27:48.745881 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 14:27:48.745892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-02 14:27:48.745903 | orchestrator |
2025-06-02 14:27:48.745914 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-02 14:27:48.745925 | orchestrator | Monday 02 June 2025 14:25:35 +0000 (0:00:02.245) 0:00:27.142 ***********
2025-06-02 14:27:48.745935 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 14:27:48.745946 | orchestrator |
2025-06-02 14:27:48.745957 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-02 14:27:48.745968 | orchestrator | Monday 02 June 2025 14:25:36 +0000 (0:00:00.925) 0:00:28.067 ***********
2025-06-02 14:27:48.745979 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:27:48.745989 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:27:48.746000 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:27:48.746011 | orchestrator |
2025-06-02 14:27:48.746055 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-02 14:27:48.746067 | orchestrator | Monday 02 June 2025 14:25:37 +0000 (0:00:00.551) 0:00:28.618 ***********
2025-06-02 14:27:48.746077 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 14:27:48.746088 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 14:27:48.746099 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 14:27:48.746110 | orchestrator |
2025-06-02 14:27:48.746121 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-02 14:27:48.746132 | orchestrator | Monday 02 June 2025 14:25:38 +0000 (0:00:01.028) 0:00:29.646 ***********
2025-06-02 14:27:48.746143 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:27:48.746154 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:27:48.746165 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:27:48.746176 | orchestrator |
2025-06-02 14:27:48.746187 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-02 14:27:48.746198 | orchestrator | Monday 02 June 2025 14:25:38 +0000 (0:00:00.315) 0:00:29.962 ***********
2025-06-02 14:27:48.746209 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 14:27:48.746220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 14:27:48.746230 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-02 14:27:48.746241 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 14:27:48.746258 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 14:27:48.746276 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-02 14:27:48.746288 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 14:27:48.746299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 14:27:48.746310 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-02 14:27:48.746321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 14:27:48.746332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 14:27:48.746343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-02 14:27:48.746354 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 14:27:48.746365 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 14:27:48.746375 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-02 14:27:48.746386 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 14:27:48.746397 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 14:27:48.746408 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-02 14:27:48.746419 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 14:27:48.746430 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 14:27:48.746441 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-02 14:27:48.746452 | orchestrator |
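The files staged by "Copying files for keystone-fernet" implement kolla-ansible's fernet key rotation: the crontab and fernet-rotate.sh run inside the keystone_fernet container, while fernet-node-sync.sh and fernet-push.sh distribute rotated keys to the other controllers through the keystone_ssh sidecar (the sshd on port 8023 referenced by the healthchecks in this log); id_rsa and ssh_config are the client half of that channel. For reference, the keystone_fernet healthcheck from the logged service map, rendered as YAML:

    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"]
      timeout: "30"
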
'fernet-node-sync.sh'}) 2025-06-02 14:27:48.746310 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 14:27:48.746321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 14:27:48.746332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 14:27:48.746343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 14:27:48.746354 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 14:27:48.746365 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 14:27:48.746375 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 14:27:48.746386 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 14:27:48.746397 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 14:27:48.746408 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 14:27:48.746419 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 14:27:48.746430 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 14:27:48.746441 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 14:27:48.746452 | orchestrator | 2025-06-02 14:27:48.746463 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 14:27:48.746473 | orchestrator | Monday 02 June 2025 14:25:47 +0000 (0:00:08.539) 0:00:38.501 *********** 2025-06-02 14:27:48.746484 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 14:27:48.746495 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 14:27:48.746551 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 14:27:48.746565 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 14:27:48.746576 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 14:27:48.746586 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 14:27:48.746596 | orchestrator | 2025-06-02 14:27:48.746605 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 14:27:48.746615 | orchestrator | Monday 02 June 2025 14:25:49 +0000 (0:00:02.579) 0:00:41.081 *********** 2025-06-02 14:27:48.746713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.746752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.746768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 14:27:48.746779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 14:27:48.746857 | orchestrator | 2025-06-02 14:27:48.746867 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 14:27:48.746877 | orchestrator | Monday 02 June 2025 14:25:51 +0000 (0:00:02.157) 0:00:43.238 *********** 2025-06-02 14:27:48.746887 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.746897 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.746907 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.746916 | orchestrator | 
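[Editor's note] The healthcheck dictionaries in the container items above map almost one-to-one onto Docker's native healthcheck options. Below is a minimal, illustrative sketch of an equivalent plain `docker run` invocation for the keystone container on testbed-node-0; this is an assumption for readability only — kolla-ansible actually creates the container through its own kolla_container module, and the bare numeric interval/timeout values in the log are interpreted here as seconds.

  # Hedged sketch: the 'keystone' item's healthcheck and volumes as plain Docker flags.
  docker run -d --name keystone \
    --health-cmd 'healthcheck_curl http://192.168.16.10:5000' \
    --health-interval 30s \
    --health-retries 3 \
    --health-start-period 5s \
    --health-timeout 30s \
    -v /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro \
    -v /etc/localtime:/etc/localtime:ro \
    -v kolla_logs:/var/log/kolla/ \
    -v keystone_fernet_tokens:/etc/keystone/fernet-keys \
    registry.osism.tech/kolla/keystone:2024.2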
2025-06-02 14:27:48.746926 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 14:27:48.746936 | orchestrator | Monday 02 June 2025 14:25:52 +0000 (0:00:00.300) 0:00:43.539 *********** 2025-06-02 14:27:48.746945 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.746955 | orchestrator | 2025-06-02 14:27:48.746965 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 14:27:48.746975 | orchestrator | Monday 02 June 2025 14:25:54 +0000 (0:00:02.061) 0:00:45.601 *********** 2025-06-02 14:27:48.746984 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.746994 | orchestrator | 2025-06-02 14:27:48.747004 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-02 14:27:48.747013 | orchestrator | Monday 02 June 2025 14:25:56 +0000 (0:00:02.391) 0:00:47.993 *********** 2025-06-02 14:27:48.747023 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.747033 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.747042 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.747052 | orchestrator | 2025-06-02 14:27:48.747062 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 14:27:48.747071 | orchestrator | Monday 02 June 2025 14:25:57 +0000 (0:00:00.837) 0:00:48.831 *********** 2025-06-02 14:27:48.747086 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.747096 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.747106 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.747115 | orchestrator | 2025-06-02 14:27:48.747125 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 14:27:48.747135 | orchestrator | Monday 02 June 2025 14:25:57 +0000 (0:00:00.386) 0:00:49.217 *********** 2025-06-02 14:27:48.747144 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.747154 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.747163 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.747173 | orchestrator | 2025-06-02 14:27:48.747183 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 14:27:48.747192 | orchestrator | Monday 02 June 2025 14:25:58 +0000 (0:00:00.349) 0:00:49.566 *********** 2025-06-02 14:27:48.747202 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747211 | orchestrator | 2025-06-02 14:27:48.747221 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 14:27:48.747230 | orchestrator | Monday 02 June 2025 14:26:11 +0000 (0:00:13.345) 0:01:02.911 *********** 2025-06-02 14:27:48.747240 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747250 | orchestrator | 2025-06-02 14:27:48.747260 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 14:27:48.747269 | orchestrator | Monday 02 June 2025 14:26:20 +0000 (0:00:09.113) 0:01:12.025 *********** 2025-06-02 14:27:48.747279 | orchestrator | 2025-06-02 14:27:48.747289 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 14:27:48.747298 | orchestrator | Monday 02 June 2025 14:26:21 +0000 (0:00:00.279) 0:01:12.304 *********** 2025-06-02 14:27:48.747308 | orchestrator | 2025-06-02 14:27:48.747317 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-06-02 14:27:48.747327 | orchestrator | Monday 02 June 2025 14:26:21 +0000 (0:00:00.064) 0:01:12.369 *********** 2025-06-02 14:27:48.747336 | orchestrator | 2025-06-02 14:27:48.747346 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 14:27:48.747356 | orchestrator | Monday 02 June 2025 14:26:21 +0000 (0:00:00.059) 0:01:12.429 *********** 2025-06-02 14:27:48.747365 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747375 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:27:48.747384 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:27:48.747394 | orchestrator | 2025-06-02 14:27:48.747403 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 14:27:48.747413 | orchestrator | Monday 02 June 2025 14:26:45 +0000 (0:00:23.929) 0:01:36.358 *********** 2025-06-02 14:27:48.747423 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747432 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:27:48.747442 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:27:48.747451 | orchestrator | 2025-06-02 14:27:48.747461 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-02 14:27:48.747471 | orchestrator | Monday 02 June 2025 14:26:54 +0000 (0:00:09.844) 0:01:46.202 *********** 2025-06-02 14:27:48.747480 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747490 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:27:48.747518 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:27:48.747529 | orchestrator | 2025-06-02 14:27:48.747539 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 14:27:48.747549 | orchestrator | Monday 02 June 2025 14:27:01 +0000 (0:00:06.283) 0:01:52.486 *********** 2025-06-02 14:27:48.747559 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 14:27:48.747568 | orchestrator | 2025-06-02 14:27:48.747578 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 14:27:48.747592 | orchestrator | Monday 02 June 2025 14:27:01 +0000 (0:00:00.742) 0:01:53.229 *********** 2025-06-02 14:27:48.747602 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:27:48.747611 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.747627 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:27:48.747636 | orchestrator | 2025-06-02 14:27:48.747646 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 14:27:48.747656 | orchestrator | Monday 02 June 2025 14:27:02 +0000 (0:00:00.755) 0:01:53.985 *********** 2025-06-02 14:27:48.747665 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:27:48.747675 | orchestrator | 2025-06-02 14:27:48.747685 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 14:27:48.747694 | orchestrator | Monday 02 June 2025 14:27:04 +0000 (0:00:01.813) 0:01:55.798 *********** 2025-06-02 14:27:48.747704 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 14:27:48.747714 | orchestrator | 2025-06-02 14:27:48.747723 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 14:27:48.747733 | orchestrator | Monday 02 June 2025 14:27:14 
+0000 (0:00:10.068) 0:02:05.866 *********** 2025-06-02 14:27:48.747743 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 14:27:48.747753 | orchestrator | 2025-06-02 14:27:48.747763 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 14:27:48.747772 | orchestrator | Monday 02 June 2025 14:27:34 +0000 (0:00:20.059) 0:02:25.926 *********** 2025-06-02 14:27:48.747782 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 14:27:48.747792 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 14:27:48.747801 | orchestrator | 2025-06-02 14:27:48.747811 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 14:27:48.747821 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:05.339) 0:02:31.266 *********** 2025-06-02 14:27:48.747831 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.747840 | orchestrator | 2025-06-02 14:27:48.747850 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 14:27:48.747860 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:00.381) 0:02:31.647 *********** 2025-06-02 14:27:48.747869 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.747879 | orchestrator | 2025-06-02 14:27:48.747889 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 14:27:48.747898 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:00.130) 0:02:31.778 *********** 2025-06-02 14:27:48.747908 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.747917 | orchestrator | 2025-06-02 14:27:48.747927 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 14:27:48.747937 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:00.127) 0:02:31.905 *********** 2025-06-02 14:27:48.747946 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.747956 | orchestrator | 2025-06-02 14:27:48.747966 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 14:27:48.747975 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:00.310) 0:02:32.215 *********** 2025-06-02 14:27:48.747985 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:27:48.747995 | orchestrator | 2025-06-02 14:27:48.748004 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 14:27:48.748014 | orchestrator | Monday 02 June 2025 14:27:43 +0000 (0:00:02.913) 0:02:35.129 *********** 2025-06-02 14:27:48.748024 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:27:48.748033 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:27:48.748043 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:27:48.748053 | orchestrator | 2025-06-02 14:27:48.748063 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:27:48.748073 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 14:27:48.748083 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 14:27:48.748107 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 
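[Editor's note] At this point keystone is bootstrapped and its service/endpoint objects are registered. As a quick aside, the result can be verified afterwards with the standard python-openstackclient; a hedged sketch, assuming admin OS_* credentials are already exported in the shell:

  # Sketch: verify the identity service registration created by the play above.
  openstack service list                      # expect: keystone (identity)
  openstack endpoint list --service keystone  # expect internal https://api-int.testbed.osism.xyz:5000
                                              # and public  https://api.testbed.osism.xyz:5000
  openstack token issue                       # end-to-end check: issue a fernet token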
2025-06-02 14:27:48.748117 | orchestrator | 2025-06-02 14:27:48.748127 | orchestrator | 2025-06-02 14:27:48.748137 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:27:48.748147 | orchestrator | Monday 02 June 2025 14:27:45 +0000 (0:00:01.479) 0:02:36.608 *********** 2025-06-02 14:27:48.748156 | orchestrator | =============================================================================== 2025-06-02 14:27:48.748166 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.93s 2025-06-02 14:27:48.748176 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.06s 2025-06-02 14:27:48.748185 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.35s 2025-06-02 14:27:48.748198 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.07s 2025-06-02 14:27:48.748214 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.84s 2025-06-02 14:27:48.748239 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.11s 2025-06-02 14:27:48.748255 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.54s 2025-06-02 14:27:48.748270 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.28s 2025-06-02 14:27:48.748285 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.34s 2025-06-02 14:27:48.748301 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.98s 2025-06-02 14:27:48.748323 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.61s 2025-06-02 14:27:48.748337 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s 2025-06-02 14:27:48.748351 | orchestrator | keystone : Creating default user role ----------------------------------- 2.91s 2025-06-02 14:27:48.748366 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.58s 2025-06-02 14:27:48.748382 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.39s 2025-06-02 14:27:48.748397 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.25s 2025-06-02 14:27:48.748412 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.23s 2025-06-02 14:27:48.748427 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.16s 2025-06-02 14:27:48.748443 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.06s 2025-06-02 14:27:48.748459 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.81s 2025-06-02 14:27:51.771727 | orchestrator | 2025-06-02 14:27:51 | INFO  | Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a is in state STARTED 2025-06-02 14:27:51.774739 | orchestrator | 2025-06-02 14:27:51 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:27:51.775154 | orchestrator | 2025-06-02 14:27:51 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:27:51.776044 | orchestrator | 2025-06-02 14:27:51 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:27:51.776353 | orchestrator | 2025-06-02 14:27:51 | INFO  | 
Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:27:51.778841 | orchestrator | 2025-06-02 14:27:51 | INFO  | Wait 1 second(s) until the next check
[repetitive polling output from 14:27:54 through 14:28:16 trimmed: the task states were re-checked every ~3 seconds; Task dffc25eb-79ef-4859-a664-10ba3a7d9e3a reached SUCCESS at 14:28:00, Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 entered STARTED, and the remaining tasks stayed STARTED]
2025-06-02 14:28:19.115325 | orchestrator | 2025-06-02 14:28:19 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:19.115485 | orchestrator | 2025-06-02 14:28:19 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:28:19.120154 | orchestrator | 2025-06-02 14:28:19 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:19.120188 | orchestrator | 
2025-06-02 14:28:19 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:19.120196 | orchestrator | 2025-06-02 14:28:19 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:19.120205 | orchestrator | 2025-06-02 14:28:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:22.163186 | orchestrator | 2025-06-02 14:28:22 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:22.164207 | orchestrator | 2025-06-02 14:28:22 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:28:22.164906 | orchestrator | 2025-06-02 14:28:22 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:22.168359 | orchestrator | 2025-06-02 14:28:22 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:22.169107 | orchestrator | 2025-06-02 14:28:22 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:22.169132 | orchestrator | 2025-06-02 14:28:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:25.211497 | orchestrator | 2025-06-02 14:28:25 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:25.214368 | orchestrator | 2025-06-02 14:28:25 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:28:25.216817 | orchestrator | 2025-06-02 14:28:25 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:25.218971 | orchestrator | 2025-06-02 14:28:25 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:25.221299 | orchestrator | 2025-06-02 14:28:25 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:25.221581 | orchestrator | 2025-06-02 14:28:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:28.263442 | orchestrator | 2025-06-02 14:28:28 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:28.263703 | orchestrator | 2025-06-02 14:28:28 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state STARTED 2025-06-02 14:28:28.264431 | orchestrator | 2025-06-02 14:28:28 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:28.265086 | orchestrator | 2025-06-02 14:28:28 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:28.265792 | orchestrator | 2025-06-02 14:28:28 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:28.265814 | orchestrator | 2025-06-02 14:28:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:31.296147 | orchestrator | 2025-06-02 14:28:31 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:31.296375 | orchestrator | 2025-06-02 14:28:31 | INFO  | Task 8f44558e-e784-42bb-ae8c-da677e6fe808 is in state SUCCESS 2025-06-02 14:28:31.296892 | orchestrator | 2025-06-02 14:28:31.296921 | orchestrator | 2025-06-02 14:28:31.296933 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:28:31.296946 | orchestrator | 2025-06-02 14:28:31.296958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:28:31.296970 | orchestrator | Monday 02 June 2025 14:27:22 +0000 (0:00:00.267) 0:00:00.267 *********** 2025-06-02 14:28:31.296982 | orchestrator | ok: [testbed-node-0] 
2025-06-02 14:28:31.296994 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:28:31.297006 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:28:31.297017 | orchestrator | ok: [testbed-manager] 2025-06-02 14:28:31.297029 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:28:31.297040 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:28:31.297052 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:28:31.297063 | orchestrator | 2025-06-02 14:28:31.297075 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:28:31.297087 | orchestrator | Monday 02 June 2025 14:27:23 +0000 (0:00:00.937) 0:00:01.204 *********** 2025-06-02 14:28:31.297098 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297110 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297122 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297134 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297145 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297157 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297168 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 14:28:31.297180 | orchestrator | 2025-06-02 14:28:31.297192 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-02 14:28:31.297225 | orchestrator | 2025-06-02 14:28:31.297237 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 14:28:31.297249 | orchestrator | Monday 02 June 2025 14:27:24 +0000 (0:00:00.793) 0:00:01.998 *********** 2025-06-02 14:28:31.297261 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:28:31.297274 | orchestrator | 2025-06-02 14:28:31.297286 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 14:28:31.297298 | orchestrator | Monday 02 June 2025 14:27:27 +0000 (0:00:02.437) 0:00:04.435 *********** 2025-06-02 14:28:31.297310 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 14:28:31.297321 | orchestrator | 2025-06-02 14:28:31.297333 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 14:28:31.297344 | orchestrator | Monday 02 June 2025 14:27:35 +0000 (0:00:08.744) 0:00:13.180 *********** 2025-06-02 14:28:31.297356 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 14:28:31.297380 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 14:28:31.297392 | orchestrator | 2025-06-02 14:28:31.297404 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 14:28:31.297415 | orchestrator | Monday 02 June 2025 14:27:41 +0000 (0:00:05.421) 0:00:18.601 *********** 2025-06-02 14:28:31.297427 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 14:28:31.297439 | orchestrator | 2025-06-02 14:28:31.297450 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] 
************************* 2025-06-02 14:28:31.297462 | orchestrator | Monday 02 June 2025 14:27:44 +0000 (0:00:03.035) 0:00:21.637 *********** 2025-06-02 14:28:31.297473 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 14:28:31.297485 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 14:28:31.297498 | orchestrator | 2025-06-02 14:28:31.297512 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 14:28:31.297525 | orchestrator | Monday 02 June 2025 14:27:48 +0000 (0:00:03.754) 0:00:25.391 *********** 2025-06-02 14:28:31.297538 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 14:28:31.297552 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-02 14:28:31.297565 | orchestrator | 2025-06-02 14:28:31.297579 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 14:28:31.297592 | orchestrator | Monday 02 June 2025 14:27:53 +0000 (0:00:05.889) 0:00:31.281 *********** 2025-06-02 14:28:31.297603 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-02 14:28:31.297615 | orchestrator | 2025-06-02 14:28:31.297646 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:28:31.297658 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297669 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297680 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297779 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297793 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297816 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297837 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.297848 | orchestrator | 2025-06-02 14:28:31.297859 | orchestrator | 2025-06-02 14:28:31.297870 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:28:31.297881 | orchestrator | Monday 02 June 2025 14:27:58 +0000 (0:00:04.098) 0:00:35.380 *********** 2025-06-02 14:28:31.297892 | orchestrator | =============================================================================== 2025-06-02 14:28:31.297903 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 8.74s 2025-06-02 14:28:31.297913 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.89s 2025-06-02 14:28:31.297924 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.42s 2025-06-02 14:28:31.297935 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.10s 2025-06-02 14:28:31.297946 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.75s 2025-06-02 14:28:31.297956 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.04s 2025-06-02 14:28:31.297967 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 2.44s 2025-06-02 14:28:31.297978 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-06-02 14:28:31.297989 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2025-06-02 14:28:31.298000 | orchestrator | 2025-06-02 14:28:31.298011 | orchestrator | 2025-06-02 14:28:31.298073 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-06-02 14:28:31.298092 | orchestrator | 2025-06-02 14:28:31.298111 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-02 14:28:31.298129 | orchestrator | Monday 02 June 2025 14:27:16 +0000 (0:00:00.196) 0:00:00.196 *********** 2025-06-02 14:28:31.298146 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298164 | orchestrator | 2025-06-02 14:28:31.298183 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-02 14:28:31.298202 | orchestrator | Monday 02 June 2025 14:27:17 +0000 (0:00:01.890) 0:00:02.087 *********** 2025-06-02 14:28:31.298217 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298228 | orchestrator | 2025-06-02 14:28:31.298238 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-02 14:28:31.298249 | orchestrator | Monday 02 June 2025 14:27:18 +0000 (0:00:00.911) 0:00:02.998 *********** 2025-06-02 14:28:31.298259 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298270 | orchestrator | 2025-06-02 14:28:31.298281 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-02 14:28:31.298292 | orchestrator | Monday 02 June 2025 14:27:19 +0000 (0:00:00.990) 0:00:03.989 *********** 2025-06-02 14:28:31.298302 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298313 | orchestrator | 2025-06-02 14:28:31.298323 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 14:28:31.298341 | orchestrator | Monday 02 June 2025 14:27:21 +0000 (0:00:01.324) 0:00:05.313 *********** 2025-06-02 14:28:31.298352 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298363 | orchestrator | 2025-06-02 14:28:31.298373 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 14:28:31.298384 | orchestrator | Monday 02 June 2025 14:27:22 +0000 (0:00:00.920) 0:00:06.234 *********** 2025-06-02 14:28:31.298394 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298406 | orchestrator | 2025-06-02 14:28:31.298418 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 14:28:31.298430 | orchestrator | Monday 02 June 2025 14:27:22 +0000 (0:00:00.811) 0:00:07.046 *********** 2025-06-02 14:28:31.298443 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298455 | orchestrator | 2025-06-02 14:28:31.298467 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 14:28:31.298487 | orchestrator | Monday 02 June 2025 14:27:24 +0000 (0:00:01.103) 0:00:08.149 *********** 2025-06-02 14:28:31.298500 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298511 | orchestrator | 2025-06-02 14:28:31.298521 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02
14:28:31.298532 | orchestrator | Monday 02 June 2025 14:27:25 +0000 (0:00:01.019) 0:00:09.169 *********** 2025-06-02 14:28:31.298543 | orchestrator | changed: [testbed-manager] 2025-06-02 14:28:31.298553 | orchestrator | 2025-06-02 14:28:31.298564 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 14:28:31.298574 | orchestrator | Monday 02 June 2025 14:28:05 +0000 (0:00:40.810) 0:00:49.979 *********** 2025-06-02 14:28:31.298585 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:28:31.298595 | orchestrator | 2025-06-02 14:28:31.298606 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 14:28:31.298616 | orchestrator | 2025-06-02 14:28:31.298668 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 14:28:31.298683 | orchestrator | Monday 02 June 2025 14:28:06 +0000 (0:00:00.142) 0:00:50.122 *********** 2025-06-02 14:28:31.298694 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:28:31.298705 | orchestrator | 2025-06-02 14:28:31.298715 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 14:28:31.298726 | orchestrator | 2025-06-02 14:28:31.298736 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 14:28:31.298747 | orchestrator | Monday 02 June 2025 14:28:17 +0000 (0:00:11.467) 0:01:01.589 *********** 2025-06-02 14:28:31.298758 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:28:31.298768 | orchestrator | 2025-06-02 14:28:31.298779 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 14:28:31.298789 | orchestrator | 2025-06-02 14:28:31.298800 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 14:28:31.298811 | orchestrator | Monday 02 June 2025 14:28:18 +0000 (0:00:01.173) 0:01:02.762 *********** 2025-06-02 14:28:31.298821 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:28:31.298832 | orchestrator | 2025-06-02 14:28:31.298852 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:28:31.298864 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 14:28:31.298875 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.298886 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.298897 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 14:28:31.298908 | orchestrator | 2025-06-02 14:28:31.298919 | orchestrator | 2025-06-02 14:28:31.298930 | orchestrator | 2025-06-02 14:28:31.298941 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:28:31.298952 | orchestrator | Monday 02 June 2025 14:28:29 +0000 (0:00:11.037) 0:01:13.799 *********** 2025-06-02 14:28:31.298962 | orchestrator | =============================================================================== 2025-06-02 14:28:31.298973 | orchestrator | Create admin user ------------------------------------------------------ 40.81s 2025-06-02 14:28:31.298984 | orchestrator | Restart ceph manager service ------------------------------------------- 23.68s 
2025-06-02 14:28:31.298994 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.89s 2025-06-02 14:28:31.299005 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.32s 2025-06-02 14:28:31.299015 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.10s 2025-06-02 14:28:31.299026 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.02s 2025-06-02 14:28:31.299044 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.99s 2025-06-02 14:28:31.299055 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s 2025-06-02 14:28:31.299065 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-06-02 14:28:31.299076 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.81s 2025-06-02 14:28:31.299087 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-06-02 14:28:31.299097 | orchestrator | 2025-06-02 14:28:31 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:31.300294 | orchestrator | 2025-06-02 14:28:31 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:31.300806 | orchestrator | 2025-06-02 14:28:31 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:31.300929 | orchestrator | 2025-06-02 14:28:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:34.335962 | orchestrator | 2025-06-02 14:28:34 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:34.336320 | orchestrator | 2025-06-02 14:28:34 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:34.337057 | orchestrator | 2025-06-02 14:28:34 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:34.337765 | orchestrator | 2025-06-02 14:28:34 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:34.337787 | orchestrator | 2025-06-02 14:28:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:37.361925 | orchestrator | 2025-06-02 14:28:37 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:37.362418 | orchestrator | 2025-06-02 14:28:37 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:37.363510 | orchestrator | 2025-06-02 14:28:37 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:37.364702 | orchestrator | 2025-06-02 14:28:37 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:37.364789 | orchestrator | 2025-06-02 14:28:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:28:40.393544 | orchestrator | 2025-06-02 14:28:40 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:28:40.395046 | orchestrator | 2025-06-02 14:28:40 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:28:40.395593 | orchestrator | 2025-06-02 14:28:40 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:28:40.396433 | orchestrator | 2025-06-02 14:28:40 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:28:40.396469 | orchestrator | 2025-06-02 
14:28:40 | INFO  | Wait 1 second(s) until the next check
[repetitive polling output from 14:28:43 through 14:29:29 trimmed: the four remaining tasks (c6c0c87e-b4be-4297-84d2-07b870d3e237, 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8, 68f4ecbd-c689-4fba-823a-9a78c9aa5252, 0e71d34c-6cb0-4028-b855-70caa45bd0e2) were re-checked every ~3 seconds and stayed in state STARTED throughout]
2025-06-02 14:29:32.165252 | orchestrator | 2025-06-02 14:29:32 | INFO  | Task 
c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:32.168924 | orchestrator | 2025-06-02 14:29:32 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:32.170100 | orchestrator | 2025-06-02 14:29:32 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:32.173572 | orchestrator | 2025-06-02 14:29:32 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:32.173602 | orchestrator | 2025-06-02 14:29:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:35.230883 | orchestrator | 2025-06-02 14:29:35 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:35.233742 | orchestrator | 2025-06-02 14:29:35 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:35.235783 | orchestrator | 2025-06-02 14:29:35 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:35.237421 | orchestrator | 2025-06-02 14:29:35 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:35.237466 | orchestrator | 2025-06-02 14:29:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:38.277077 | orchestrator | 2025-06-02 14:29:38 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:38.277709 | orchestrator | 2025-06-02 14:29:38 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:38.278174 | orchestrator | 2025-06-02 14:29:38 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:38.280264 | orchestrator | 2025-06-02 14:29:38 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:38.280291 | orchestrator | 2025-06-02 14:29:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:41.332596 | orchestrator | 2025-06-02 14:29:41 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:41.332984 | orchestrator | 2025-06-02 14:29:41 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:41.334519 | orchestrator | 2025-06-02 14:29:41 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:41.336529 | orchestrator | 2025-06-02 14:29:41 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:41.336560 | orchestrator | 2025-06-02 14:29:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:44.384969 | orchestrator | 2025-06-02 14:29:44 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:44.385685 | orchestrator | 2025-06-02 14:29:44 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:44.386226 | orchestrator | 2025-06-02 14:29:44 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:44.387134 | orchestrator | 2025-06-02 14:29:44 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:44.387162 | orchestrator | 2025-06-02 14:29:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:47.412799 | orchestrator | 2025-06-02 14:29:47 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:47.416883 | orchestrator | 2025-06-02 14:29:47 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:47.420477 | orchestrator | 2025-06-02 14:29:47 | INFO  | Task 
68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:47.421583 | orchestrator | 2025-06-02 14:29:47 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:47.421680 | orchestrator | 2025-06-02 14:29:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:50.465960 | orchestrator | 2025-06-02 14:29:50 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:50.466091 | orchestrator | 2025-06-02 14:29:50 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:50.466108 | orchestrator | 2025-06-02 14:29:50 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:50.466121 | orchestrator | 2025-06-02 14:29:50 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:50.466133 | orchestrator | 2025-06-02 14:29:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:53.499166 | orchestrator | 2025-06-02 14:29:53 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:53.501480 | orchestrator | 2025-06-02 14:29:53 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:53.502728 | orchestrator | 2025-06-02 14:29:53 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:53.504003 | orchestrator | 2025-06-02 14:29:53 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:53.504056 | orchestrator | 2025-06-02 14:29:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:56.546252 | orchestrator | 2025-06-02 14:29:56 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:56.549597 | orchestrator | 2025-06-02 14:29:56 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:56.550635 | orchestrator | 2025-06-02 14:29:56 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:56.552982 | orchestrator | 2025-06-02 14:29:56 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:56.553055 | orchestrator | 2025-06-02 14:29:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:29:59.588750 | orchestrator | 2025-06-02 14:29:59 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:29:59.593262 | orchestrator | 2025-06-02 14:29:59 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:29:59.593318 | orchestrator | 2025-06-02 14:29:59 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:29:59.593817 | orchestrator | 2025-06-02 14:29:59 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:29:59.594108 | orchestrator | 2025-06-02 14:29:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:30:02.636389 | orchestrator | 2025-06-02 14:30:02 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:30:02.638509 | orchestrator | 2025-06-02 14:30:02 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:30:02.640816 | orchestrator | 2025-06-02 14:30:02 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state STARTED 2025-06-02 14:30:02.641901 | orchestrator | 2025-06-02 14:30:02 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:30:02.641937 | orchestrator | 2025-06-02 14:30:02 | INFO  | Wait 1 
[polling continues at 14:30:05, 14:30:08, 14:30:11 and 14:30:14, still with all four tasks in state STARTED]
2025-06-02 14:30:17 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED
2025-06-02 14:30:17 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED
2025-06-02 14:30:17 | INFO  | Task 68f4ecbd-c689-4fba-823a-9a78c9aa5252 is in state SUCCESS
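The polling output above is the osism client waiting for four asynchronous deployment tasks to finish. As a rough sketch of how such a wait loop can be written, assuming a Celery-style result backend (the broker URL, app wiring and function name below are illustrative, not OSISM's actual implementation):

    import time

    from celery import Celery

    # Illustrative broker/backend URL, not the one used by this deployment.
    app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")

    TASK_IDS = [
        "c6c0c87e-b4be-4297-84d2-07b870d3e237",
        "6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8",
        "68f4ecbd-c689-4fba-823a-9a78c9aa5252",
        "0e71d34c-6cb0-4028-b855-70caa45bd0e2",
    ]

    def wait_for_tasks(task_ids, interval=1):
        """Print each task's state and sleep until every task has finished."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard() is safe
                state = app.AsyncResult(task_id).state
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    wait_for_tasks(TASK_IDS)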
[buffered Ansible output from the completed task follows; the play itself started at 14:27:22]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Monday 02 June 2025 14:27:22 +0000 (0:00:00.248) 0:00:00.248 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Monday 02 June 2025 14:27:23 +0000 (0:00:00.317) 0:00:00.566 ***********
ok: [testbed-node-0] => (item=enable_glance_True)
ok: [testbed-node-1] => (item=enable_glance_True)
ok: [testbed-node-2] => (item=enable_glance_True)

PLAY [Apply role glance] *******************************************************

TASK [glance : include_tasks] **************************************************
Monday 02 June 2025 14:27:23 +0000 (0:00:00.342) 0:00:00.908 ***********
included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : glance | Creating services] ************************
Monday 02 June 2025 14:27:24 +0000 (0:00:00.452) 0:00:01.361 ***********
changed: [testbed-node-0] => (item=glance (image))

TASK [service-ks-register : glance | Creating endpoints] ***********************
Monday 02 June 2025 14:27:34 +0000 (0:00:10.112) 0:00:11.473 ***********
changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)

TASK [service-ks-register : glance | Creating projects] ************************
Monday 02 June 2025 14:27:40 +0000 (0:00:06.129) 0:00:17.602 ***********
changed: [testbed-node-0] => (item=service)

TASK [service-ks-register : glance | Creating users] ***************************
Monday 02 June 2025 14:27:43 +0000 (0:00:03.015) 0:00:20.618 ***********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=glance -> service)

TASK [service-ks-register : glance | Creating roles] ***************************
Monday 02 June 2025 14:27:46 +0000 (0:00:03.606) 0:00:24.225 ***********
ok: [testbed-node-0] => (item=admin)
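The service-ks-register tasks above register Glance in Keystone: a "glance" service of type image, its internal and public endpoints, the "service" project and the "glance" user; the admin role grant follows directly below. kolla-ansible drives this through Ansible modules from the openstack.cloud collection; a hand-rolled equivalent with openstacksdk might look like this (the cloud name and password are placeholders, not values from this deployment):

    import openstack

    conn = openstack.connect(cloud="testbed")  # placeholder clouds.yaml entry

    service = conn.identity.create_service(name="glance", type="image")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9292"),
        ("public", "https://api.testbed.osism.xyz:9292"),
    ]:
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    project = conn.identity.create_project(name="service", domain_id="default")
    user = conn.identity.create_user(
        name="glance",
        password="CHANGE_ME",  # placeholder; real deployments keep this in passwords.yml
        default_project_id=project.id,
    )
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)

The [WARNING] about update_password appears to be the known cosmetic warning from the user module: update_password is not itself a secret, but because the module does not declare no_log for that argument, Ansible flags it anyway.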
TASK [service-ks-register : glance | Granting user roles] **********************
Monday 02 June 2025 14:27:50 +0000 (0:00:03.331) 0:00:27.556 ***********
changed: [testbed-node-0] => (item=glance -> service -> admin)

TASK [glance : Ensuring config directories exist] ******************************
Monday 02 June 2025 14:27:54 +0000 (0:00:03.829) 0:00:31.385 ***********
changed: [testbed-node-2] => (item=glance-api)
changed: [testbed-node-1] => (item=glance-api)
changed: [testbed-node-0] => (item=glance-api)

[each loop item is the full glance-api container definition, identical on all three
nodes except for the node's own IP address; for testbed-node-0 it is:]

    {'key': 'glance-api',
     'value': {'container_name': 'glance_api',
               'group': 'glance-api',
               'host_in_groups': True,
               'enabled': True,
               'image': 'registry.osism.tech/kolla/glance-api:2024.2',
               'environment': {'http_proxy': '', 'https_proxy': '',
                               'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'},
               'privileged': True,
               'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
                           '/etc/localtime:/etc/localtime:ro',
                           '/etc/timezone:/etc/timezone:ro',
                           'glance:/var/lib/glance/', '',
                           'kolla_logs:/var/log/kolla/', '',
                           'iscsi_info:/etc/iscsi', '/dev:/dev'],
               'dimensions': {},
               'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
                               'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'],
                               'timeout': '30'},
               'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False,
                                          'port': '9292',
                                          'frontend_http_extra': ['timeout client 6h'],
                                          'backend_http_extra': ['timeout server 6h'],
                                          'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5',
                                                                 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5',
                                                                 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']},
                           'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True,
                                                   'external_fqdn': 'api.testbed.osism.xyz',
                                                   'port': '9292',
                                                   'frontend_http_extra': ['timeout client 6h'],
                                                   'backend_http_extra': ['timeout server 6h'],
                                                   'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5',
                                                                          'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5',
                                                                          'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}

[every later glance task that loops over this service definition prints the same item
verbatim; it is abbreviated to "(item=glance-api)" below]

TASK [glance : include_tasks] **************************************************
Monday 02 June 2025 14:27:57 +0000 (0:00:03.467) 0:00:34.853 ***********
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [glance : Ensuring glance service ceph config subdir exists] **************
Monday 02 June 2025 14:27:58 +0000 (0:00:00.573) 0:00:35.426 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [glance : Copy over multiple ceph configs for Glance] *********************
Monday 02 June 2025 14:28:01 +0000 (0:00:03.713) 0:00:39.139 ***********
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
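The container definition shown above is handed to Docker more or less verbatim: the healthcheck block becomes a Docker healthcheck and the volumes list mixes bind mounts and named volumes. For illustration only, a roughly equivalent direct call with the docker Python SDK (kolla-ansible actually manages containers through its own kolla_container module, so this is a sketch of the effect, not the real code path):

    import docker
    from docker.types import Healthcheck

    NS = 1_000_000_000  # the Docker API expects durations in nanoseconds

    client = docker.from_env()
    client.containers.run(
        "registry.osism.tech/kolla/glance-api:2024.2",
        name="glance_api",
        detach=True,
        privileged=True,
        environment={
            "http_proxy": "",
            "https_proxy": "",
            "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9",
        },
        volumes=[
            "/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "glance:/var/lib/glance/",
            "kolla_logs:/var/log/kolla/",
        ],
        healthcheck=Healthcheck(
            test=["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
            interval=30 * NS,
            timeout=30 * NS,
            retries=3,
            start_period=5 * NS,
        ),
    )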
TASK [glance : Copy over ceph Glance keyrings] *********************************
Monday 02 June 2025 14:28:03 +0000 (0:00:01.362) 0:00:40.501 ***********
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})

TASK [glance : Ensuring config directory has correct owner and permission] *****
Monday 02 June 2025 14:28:04 +0000 (0:00:00.958) 0:00:41.460 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [glance : Check if policies shall be overwritten] *************************
Monday 02 June 2025 14:28:05 +0000 (0:00:01.287) 0:00:42.747 ***********
skipping: [testbed-node-0]

TASK [glance : Set glance policy file] *****************************************
Monday 02 June 2025 14:28:05 +0000 (0:00:00.241) 0:00:42.989 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : include_tasks] **************************************************
Monday 02 June 2025 14:28:05 +0000 (0:00:00.279) 0:00:43.268 ***********
included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
Monday 02 June 2025 14:28:06 +0000 (0:00:00.890) 0:00:44.159 ***********
changed: [testbed-node-1] => (item=glance-api)
changed: [testbed-node-0] => (item=glance-api)
changed: [testbed-node-2] => (item=glance-api)

TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
Monday 02 June 2025 14:28:10 +0000 (0:00:03.787) 0:00:47.947 ***********
skipping: [testbed-node-2] => (item=glance-api)
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item=glance-api)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=glance-api)
skipping: [testbed-node-1]

TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
Monday 02 June 2025 14:28:13 +0000 (0:00:02.513) 0:00:50.460 ***********
skipping: [testbed-node-0] => (item=glance-api)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=glance-api)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=glance-api)
skipping: [testbed-node-2]

TASK [glance : Creating TLS backend PEM File] **********************************
Monday 02 June 2025 14:28:16 +0000 (0:00:03.197) 0:00:53.658 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [glance : Copying over config.json files for services] ********************
Monday 02 June 2025 14:28:20 +0000 (0:00:04.046) 0:00:57.704 ***********
changed: [testbed-node-0] => (item=glance-api)
changed: [testbed-node-1] => (item=glance-api)
changed: [testbed-node-2] => (item=glance-api)

TASK [glance : Copying over glance-api.conf] ***********************************
Monday 02 June 2025 14:28:25 +0000 (0:00:05.188) 0:01:02.893 ***********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [glance : Copying over glance-cache.conf for glance_api] ******************
Monday 02 June 2025 14:28:33 +0000 (0:00:07.761) 0:01:10.654 ***********
skipping: [testbed-node-2]
skipping: [testbed-node-1]
skipping: [testbed-node-0]

TASK [glance : Copying over glance-swift.conf for glance_api] ******************
Monday 02 June 2025 14:28:38 +0000 (0:00:04.764) 0:01:15.419 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [glance : Copying over glance-image-import.conf] **************************
Monday 02 June 2025 14:28:42 +0000 (0:00:04.393) 0:01:19.812 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [glance : Copying over property-protections-rules.conf] *******************
Monday 02 June 2025 14:28:46 +0000 (0:00:03.650) 0:01:23.463 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : Copying over existing policy file] ******************************
Monday 02 June 2025 14:28:51 +0000 (0:00:05.178) 0:01:28.641 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
Monday 02 June 2025 14:28:51 +0000 (0:00:00.313) 0:01:28.954 ***********
skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
skipping: [testbed-node-2]

TASK [glance : Check glance containers] ****************************************
Monday 02 June 2025 14:28:56 +0000 (0:00:04.832) 0:01:33.787 ***********
changed: [testbed-node-1] => (item=glance-api)
changed: [testbed-node-0] => (item=glance-api)
changed: [testbed-node-2] => (item=glance-api)
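The config.json files copied earlier in the play follow kolla's container configuration format: on container start, kolla's init script copies each listed source into place with the given owner and permissions, then execs the command. A minimal, hypothetical example of the shape of such a file (paths and permissions are illustrative, not copied from this deployment):

    import json

    glance_api_config = {
        "command": "glance-api",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/glance-api.conf",
                "dest": "/etc/glance/glance-api.conf",
                "owner": "glance",
                "perm": "0600",
            },
            {
                "source": "/var/lib/kolla/config_files/ceph",
                "dest": "/etc/ceph",
                "owner": "glance",
                "perm": "0700",
            },
        ],
    }

    print(json.dumps(glance_api_config, indent=2))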
************************************************** 2025-06-02 14:30:17.869348 | orchestrator | Monday 02 June 2025 14:29:04 +0000 (0:00:08.489) 0:01:42.276 *********** 2025-06-02 14:30:17.869358 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:17.869369 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:17.869380 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:17.869391 | orchestrator | 2025-06-02 14:30:17.869402 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 14:30:17.869412 | orchestrator | Monday 02 June 2025 14:29:06 +0000 (0:00:01.042) 0:01:43.319 *********** 2025-06-02 14:30:17.869423 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869434 | orchestrator | 2025-06-02 14:30:17.869445 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 14:30:17.869455 | orchestrator | Monday 02 June 2025 14:29:08 +0000 (0:00:02.523) 0:01:45.843 *********** 2025-06-02 14:30:17.869466 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869477 | orchestrator | 2025-06-02 14:30:17.869488 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 14:30:17.869499 | orchestrator | Monday 02 June 2025 14:29:10 +0000 (0:00:02.086) 0:01:47.929 *********** 2025-06-02 14:30:17.869510 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869521 | orchestrator | 2025-06-02 14:30:17.869531 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 14:30:17.869542 | orchestrator | Monday 02 June 2025 14:29:12 +0000 (0:00:02.016) 0:01:49.945 *********** 2025-06-02 14:30:17.869553 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869564 | orchestrator | 2025-06-02 14:30:17.869574 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 14:30:17.869585 | orchestrator | Monday 02 June 2025 14:29:38 +0000 (0:00:25.642) 0:02:15.588 *********** 2025-06-02 14:30:17.869596 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869607 | orchestrator | 2025-06-02 14:30:17.869623 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 14:30:17.869635 | orchestrator | Monday 02 June 2025 14:29:41 +0000 (0:00:02.926) 0:02:18.514 *********** 2025-06-02 14:30:17.869646 | orchestrator | 2025-06-02 14:30:17.869656 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 14:30:17.869667 | orchestrator | Monday 02 June 2025 14:29:41 +0000 (0:00:00.087) 0:02:18.602 *********** 2025-06-02 14:30:17.869678 | orchestrator | 2025-06-02 14:30:17.869689 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 14:30:17.869710 | orchestrator | Monday 02 June 2025 14:29:41 +0000 (0:00:00.103) 0:02:18.706 *********** 2025-06-02 14:30:17.869721 | orchestrator | 2025-06-02 14:30:17.869732 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-02 14:30:17.869743 | orchestrator | Monday 02 June 2025 14:29:41 +0000 (0:00:00.105) 0:02:18.811 *********** 2025-06-02 14:30:17.869754 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:17.869765 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:17.869775 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:17.869786 | 
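The glance bootstrap sequence above follows a common pattern: create the service database and user, temporarily enable MariaDB's log_bin_trust_function_creators (so the schema migration may create stored functions while binary logging is active), run the one-shot bootstrap container, then switch the variable back off. A minimal sketch of that pattern, assuming the community.mysql collection; the login host and credential variables are placeholders, not the values used in this job:

- name: Create Glance database
  community.mysql.mysql_db:
    login_host: 192.168.16.9              # placeholder internal VIP
    login_user: root
    login_password: "{{ database_password }}"
    name: glance

- name: Create Glance database user and set permissions
  community.mysql.mysql_user:
    login_host: 192.168.16.9
    login_user: root
    login_password: "{{ database_password }}"
    name: glance
    password: "{{ glance_database_password }}"
    priv: "glance.*:ALL"
    host: "%"

- name: Enable log_bin_trust_function_creators
  community.mysql.mysql_variables:
    login_host: 192.168.16.9
    login_user: root
    login_password: "{{ database_password }}"
    variable: log_bin_trust_function_creators
    value: 1

# ... run the one-shot glance bootstrap container (schema sync) here ...

- name: Disable log_bin_trust_function_creators
  community.mysql.mysql_variables:
    login_host: 192.168.16.9
    login_user: root
    login_password: "{{ database_password }}"
    variable: log_bin_trust_function_creators
    value: 0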
orchestrator | 2025-06-02 14:30:17.869797 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:30:17.869809 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 14:30:17.869821 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 14:30:17.869832 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 14:30:17.869843 | orchestrator | 2025-06-02 14:30:17.869854 | orchestrator | 2025-06-02 14:30:17.869865 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:30:17.869875 | orchestrator | Monday 02 June 2025 14:30:16 +0000 (0:00:35.399) 0:02:54.211 *********** 2025-06-02 14:30:17.869886 | orchestrator | =============================================================================== 2025-06-02 14:30:17.869897 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.40s 2025-06-02 14:30:17.869953 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.64s 2025-06-02 14:30:17.869964 | orchestrator | service-ks-register : glance | Creating services ----------------------- 10.11s 2025-06-02 14:30:17.869975 | orchestrator | glance : Check glance containers ---------------------------------------- 8.49s 2025-06-02 14:30:17.869986 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.76s 2025-06-02 14:30:17.869997 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.13s 2025-06-02 14:30:17.870007 | orchestrator | glance : Copying over config.json files for services -------------------- 5.19s 2025-06-02 14:30:17.870065 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.18s 2025-06-02 14:30:17.870079 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.83s 2025-06-02 14:30:17.870091 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.76s 2025-06-02 14:30:17.870102 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.39s 2025-06-02 14:30:17.870113 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.05s 2025-06-02 14:30:17.870123 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.83s 2025-06-02 14:30:17.870134 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.79s 2025-06-02 14:30:17.870156 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.71s 2025-06-02 14:30:17.870168 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.65s 2025-06-02 14:30:17.870178 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.61s 2025-06-02 14:30:17.870189 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.47s 2025-06-02 14:30:17.870200 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.33s 2025-06-02 14:30:17.870211 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.20s 2025-06-02 14:30:17.870410 | orchestrator | 2025-06-02 14:30:17 | INFO  | 
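Each item echoed by the "Check glance containers" task above is one entry of a services dict: container name, image, volume list (including empty '' placeholders that get filtered out), environment, healthcheck, and per-service haproxy settings such as custom_member_list. A rough sketch of how such a dict can drive a container-check loop; community.docker.docker_container stands in here for kolla-ansible's own container module, which has richer compare-and-recreate semantics:

- name: Check glance containers
  community.docker.docker_container:
    name: "{{ item.value.container_name }}"
    image: "{{ item.value.image }}"
    volumes: "{{ item.value.volumes | reject('equalto', '') | list }}"  # drop the '' entries
    env: "{{ item.value.environment | default({}) }}"
    privileged: "{{ item.value.privileged | default(false) }}"
    state: started
  when:
    - item.value.enabled | bool
    - item.value.host_in_groups | default(true) | bool
  with_dict: "{{ glance_services }}"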
Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:30:17.870482 | orchestrator | 2025-06-02 14:30:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:30:20.923337 | orchestrator | 2025-06-02 14:30:20 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state STARTED 2025-06-02 14:30:20.924410 | orchestrator | 2025-06-02 14:30:20 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:30:20.926065 | orchestrator | 2025-06-02 14:30:20 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:30:20.930098 | orchestrator | 2025-06-02 14:30:20 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:30:20.930699 | orchestrator | 2025-06-02 14:30:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:30:24.002833 | orchestrator | 2025-06-02 14:30:24.002980 | orchestrator | 2025-06-02 14:30:24.002999 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:30:24.003012 | orchestrator | 2025-06-02 14:30:24.003024 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:30:24.003035 | orchestrator | Monday 02 June 2025 14:27:16 +0000 (0:00:00.242) 0:00:00.242 *********** 2025-06-02 14:30:24.003047 | orchestrator | ok: [testbed-manager] 2025-06-02 14:30:24.003059 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:30:24.003071 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:30:24.003082 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:30:24.003093 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:30:24.003103 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:30:24.003114 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:30:24.003125 | orchestrator | 2025-06-02 14:30:24.003136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:30:24.003163 | orchestrator | Monday 02 June 2025 14:27:16 +0000 (0:00:00.726) 0:00:00.969 *********** 2025-06-02 14:30:24.003176 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003188 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003199 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003211 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003222 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003233 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003244 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 14:30:24.003255 | orchestrator | 2025-06-02 14:30:24.003266 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 14:30:24.003276 | orchestrator | 2025-06-02 14:30:24.003288 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 14:30:24.003299 | orchestrator | Monday 02 June 2025 14:27:17 +0000 (0:00:00.695) 0:00:01.664 *********** 2025-06-02 14:30:24.003311 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:30:24.003323 | orchestrator | 2025-06-02 14:30:24.003335 | orchestrator | TASK [prometheus : Ensuring config 
directories exist] ************************** 2025-06-02 14:30:24.003345 | orchestrator | Monday 02 June 2025 14:27:19 +0000 (0:00:01.495) 0:00:03.160 *********** 2025-06-02 14:30:24.003360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003414 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:30:24.003438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003498 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003607 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:30:24.003629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.003689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003704 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003809 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.003867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.003879 | orchestrator | 2025-06-02 14:30:24.003890 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 14:30:24.003901 | orchestrator | Monday 02 June 2025 14:27:22 +0000 (0:00:03.852) 0:00:07.012 *********** 2025-06-02 14:30:24.003931 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 14:30:24.003943 | orchestrator | 2025-06-02 14:30:24.003954 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 14:30:24.003973 | orchestrator | Monday 02 June 2025 14:27:24 +0000 (0:00:01.477) 0:00:08.489 *********** 2025-06-02 14:30:24.003985 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:30:24.003997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004084 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.004096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004148 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004248 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:30:24.004272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.004347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.004983 | orchestrator | 2025-06-02 14:30:23 | INFO  | Task c6c0c87e-b4be-4297-84d2-07b870d3e237 is in state SUCCESS 2025-06-02 14:30:24.005512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.005587 | orchestrator | 2025-06-02 14:30:24.005610 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 14:30:24.005631 | orchestrator | Monday 02 June 2025 14:27:30 +0000 (0:00:05.870) 0:00:14.360 *********** 2025-06-02 14:30:24.005653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.005675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.005696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.005717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.005739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.005786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 14:30:24.005827 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.005848 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.005871 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 14:30:24.005893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.005958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.005981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006112 | orchestrator | 
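The two "Copying over backend internal TLS certificate/key" tasks skip every host and item because backend TLS is not enabled in this testbed. A sketch of the conditional shape such a task typically has; the variable and path names are illustrative, not kolla-ansible's exact ones:

- name: prometheus | Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_tls_backend_cert }}"   # illustrative variable name
    dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0644"
  when:
    - item.value.enabled | bool
    - kolla_enable_tls_backend | bool     # false in this run, so every item skips
  with_dict: "{{ prometheus_services }}"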
skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006159 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.006181 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.006203 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.006226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006292 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.006314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006436 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.006456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006508 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.006520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006614 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.006634 | orchestrator | 2025-06-02 14:30:24.006654 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 14:30:24.006669 | orchestrator | Monday 02 June 2025 14:27:31 +0000 (0:00:01.707) 0:00:16.068 *********** 2025-06-02 14:30:24.006688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-02 14:30:24.006731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.006802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 14:30:24.006854 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.006887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.006907 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 14:30:24.007011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007032 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.007053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.007091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007185 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.007197 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.007208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.007220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 14:30:24.007322 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.007342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.007360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007383 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.007395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.007406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007437 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.007449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 14:30:24.007461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 14:30:24.007491 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.007502 | orchestrator | 2025-06-02 14:30:24.007514 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 14:30:24.007525 | orchestrator | Monday 02 June 2025 14:27:33 +0000 (0:00:01.853) 0:00:17.921 *********** 2025-06-02 14:30:24.007542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:30:24.007554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007671 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.007714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.007764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.007806 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.007837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.007866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.007974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.007994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.008014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:30:24.008046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.008074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.008093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.008114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.008143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.008163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.008183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.008204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.008225 | orchestrator | 2025-06-02 14:30:24.008244 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 14:30:24.008261 | orchestrator | Monday 02 June 2025 14:27:39 +0000 (0:00:05.484) 0:00:23.406 *********** 2025-06-02 14:30:24.008272 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 14:30:24.008283 | orchestrator | 2025-06-02 14:30:24.008294 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-02 14:30:24.008313 | orchestrator | Monday 02 June 2025 14:27:40 +0000 (0:00:00.861) 0:00:24.267 *********** 2025-06-02 14:30:24.008332 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008345 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008376 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 14:30:24.008387 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008399 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008417 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008434 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008446 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 
'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008465 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096001, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.702115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008477 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008488 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008500 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008511 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008541 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008554 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008572 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008584 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095959, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6951149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008608 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095987, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.699115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 14:30:24.008621 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095959, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6951149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008653 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095959, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6951149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008674 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095959, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6951149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008685 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095957, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.694115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095959, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6951149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 14:30:24.008709 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095980, 'dev': 80, 'nlink': 1, 'atime': 1748822535.0, 'mtime': 1748822535.0, 'ctime': 1748868605.6981149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})
Loop item summary (all items are regular files under /operations/prometheus/, owner root:root, mode 0644, nlink 1, dev 80, atime/mtime 1748822535.0; each item was reported changed on [testbed-manager] and skipping, conditional false, on [testbed-node-0] through [testbed-node-5]):

    File                        Size (bytes)    Inode
    alertmanager.rules          5051            1095957
    alertmanager.rec.rules      3               1095955
    cadvisor.rules              3900            1095959
    ceph.rules                  55956           1095963
    ceph.rec.rules              3               1095961
    elasticsearch.rules         5987            1095967
    fluentd-aggregator.rules    996             1095969
    haproxy.rules               7933            1095975
    hardware.rules              5593            1095980
    mysql.rules                 3792            1095983
    node.rules                  13522           1095990
    openstack.rules             12293           1095992
    prometheus.rules            12980           1096003
    prometheus-extra.rules      7408            1095999
    rabbitmq.rules              3539            1096371
    redfish.rules               334             1096375

2025-06-02 14:30:24.011202 | orchestrator |
2025-06-02 14:30:24.011215 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 14:30:24.011230 | orchestrator | Monday 02 June 2025 14:28:03 +0000 (0:00:23.314)       0:00:47.581 ***********
2025-06-02 14:30:24.011241 | orchestrator | ok: [testbed-manager -> localhost]
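For context, the stat dictionaries in the loop above are the typical output of an Ansible find task registered on the deployment host and fed into a copy loop. A minimal sketch of that pattern, with hypothetical names, paths, and guard condition (this is not the actual kolla-ansible prometheus role source):

    # Find the alert rules files staged on the deployment host, then copy each
    # one to the hosts that run the Prometheus server. The registered variable
    # exposes the stat dictionaries (path, mode, size, inode, ...) seen above.
    - name: Find prometheus alert rules files
      ansible.builtin.find:
        path: "/operations/prometheus"          # assumed staging directory
        pattern: "*.rules"
      delegate_to: localhost
      run_once: true
      register: prometheus_alert_rules

    - name: Copying over prometheus alert rules files
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed target
        mode: "0644"
      loop: "{{ prometheus_alert_rules.files }}"
      # Hypothetical guard; it would explain why only testbed-manager reports
      # changed while testbed-node-0..5 skip every item in this run.
      when: inventory_hostname in groups['prometheus']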
2025-06-02 14:30:24.011251 | orchestrator |
2025-06-02 14:30:24.011261 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 14:30:24.011271 | orchestrator | Monday 02 June 2025 14:28:04 +0000 (0:00:00.659)       0:00:48.241 ***********
2025-06-02 14:30:24.011281 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011343 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011408 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011517 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011586 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011655 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011740 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-06-02 14:30:24.011790 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 14:30:24.011800 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 14:30:24.011810 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 14:30:24.011819 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 14:30:24.011829 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 14:30:24.011839 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 14:30:24.011848 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 14:30:24.011858 | orchestrator |
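The [WARNING] lines above are benign: the role probes an optional per-host override directory (overlays/prometheus/<inventory_hostname>/prometheus.yml.d) for every host, and Ansible's find module reports the probe as skipped when the directory does not exist, while the task itself still finishes ok with an empty file list. A minimal sketch of that probe, with assumed variable names (not the actual role source):

    # Probe optional per-host prometheus.yml fragments on the deployment host.
    # A missing directory only produces the "is not a directory" warning seen
    # above; it does not fail the task.
    - name: Find prometheus host config overrides
      ansible.builtin.find:
        path: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
        pattern: "*.yml"
      delegate_to: localhost
      register: prometheus_host_overrides

    # Any fragments found here (and by the preceding common-overrides probe)
    # would then be merged into the rendered prometheus.yml.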
2025-06-02 14:30:24.011858 | orchestrator |
2025-06-02 14:30:24.011868 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-02 14:30:24.011878 | orchestrator | Monday 02 June 2025 14:28:05 +0000 (0:00:01.827) 0:00:50.069 ***********
2025-06-02 14:30:24.011887 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.011897 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:30:24.011907 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.011942 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:30:24.011953 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.011963 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:30:24.011973 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.011983 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:30:24.011992 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.012002 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:30:24.012012 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.012022 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:30:24.012032 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 14:30:24.012049 | orchestrator |
2025-06-02 14:30:24.012059 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-02 14:30:24.012069 | orchestrator | Monday 02 June 2025 14:28:21 +0000 (0:00:15.812) 0:01:05.882 ***********
2025-06-02 14:30:24.012079 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012089 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:30:24.012099 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012109 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:30:24.012127 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012137 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:30:24.012147 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012157 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:30:24.012171 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012183 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:30:24.012199 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012209 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:30:24.012219 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 14:30:24.012232 | orchestrator |
2025-06-02 14:30:24.012247 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-02 14:30:24.012265 | orchestrator | Monday 02 June 2025 14:28:26 +0000 (0:00:04.298) 0:01:10.181 ***********
2025-06-02 14:30:24.012279 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 14:30:24.012290 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 14:30:24.012300 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 14:30:24.012310 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 14:30:24.012320 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:30:24.012329 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:30:24.012339 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:30:24.012349 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:30:24.012359 | 
orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-02 14:30:24.012369 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 14:30:24.012379 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.012389 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 14:30:24.012399 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.012408 | orchestrator | 2025-06-02 14:30:24.012418 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-02 14:30:24.012428 | orchestrator | Monday 02 June 2025 14:28:28 +0000 (0:00:02.459) 0:01:12.640 *********** 2025-06-02 14:30:24.012438 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 14:30:24.012447 | orchestrator | 2025-06-02 14:30:24.012457 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-02 14:30:24.012467 | orchestrator | Monday 02 June 2025 14:28:29 +0000 (0:00:00.921) 0:01:13.561 *********** 2025-06-02 14:30:24.012484 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.012494 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.012503 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.012513 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.012523 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.012533 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.012542 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.012552 | orchestrator | 2025-06-02 14:30:24.012562 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-02 14:30:24.012572 | orchestrator | Monday 02 June 2025 14:28:30 +0000 (0:00:00.711) 0:01:14.272 *********** 2025-06-02 14:30:24.012582 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.012591 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.012601 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.012611 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.012620 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.012630 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.012646 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.012656 | orchestrator | 2025-06-02 14:30:24.012666 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-02 14:30:24.012676 | orchestrator | Monday 02 June 2025 14:28:32 +0000 (0:00:02.765) 0:01:17.038 *********** 2025-06-02 14:30:24.012694 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012712 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012729 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.012746 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.012758 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012769 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.012787 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012805 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.012822 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012840 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.012859 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012884 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.012900 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 14:30:24.012984 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.012998 | orchestrator | 2025-06-02 14:30:24.013008 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-02 14:30:24.013025 | orchestrator | Monday 02 June 2025 14:28:34 +0000 (0:00:01.885) 0:01:18.923 *********** 2025-06-02 14:30:24.013043 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013070 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.013089 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013107 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.013124 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.013155 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-02 14:30:24.013171 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013189 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.013210 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013220 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 14:30:24.013230 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.013240 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.013250 | orchestrator | 2025-06-02 14:30:24.013260 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-02 14:30:24.013270 | orchestrator | Monday 02 June 2025 14:28:36 +0000 (0:00:02.136) 0:01:21.059 *********** 2025-06-02 14:30:24.013280 | orchestrator | [WARNING]: Skipped 2025-06-02 14:30:24.013290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-02 14:30:24.013300 | orchestrator | due to this access issue: 2025-06-02 14:30:24.013310 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-02 14:30:24.013319 | orchestrator | not a directory 2025-06-02 14:30:24.013332 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 14:30:24.013341 | orchestrator | 2025-06-02 14:30:24.013349 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-02 14:30:24.013357 | orchestrator | Monday 02 
June 2025 14:28:38 +0000 (0:00:01.377) 0:01:22.437 *********** 2025-06-02 14:30:24.013365 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.013373 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.013381 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.013388 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.013399 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.013414 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.013428 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.013442 | orchestrator | 2025-06-02 14:30:24.013453 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-02 14:30:24.013461 | orchestrator | Monday 02 June 2025 14:28:39 +0000 (0:00:01.010) 0:01:23.447 *********** 2025-06-02 14:30:24.013470 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.013478 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:30:24.013485 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:30:24.013493 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:30:24.013501 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:30:24.013509 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:30:24.013516 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:30:24.013524 | orchestrator | 2025-06-02 14:30:24.013532 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 14:30:24.013540 | orchestrator | Monday 02 June 2025 14:28:40 +0000 (0:00:00.747) 0:01:24.195 *********** 2025-06-02 14:30:24.013549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 14:30:24.013566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 
14:30:24.013594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 14:30:24.013636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013694 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 14:30:24.013781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 14:30:24.013837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 14:30:24.013862 | orchestrator | 2025-06-02 14:30:24.013871 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 14:30:24.013879 | orchestrator | Monday 02 June 2025 14:28:44 +0000 (0:00:04.255) 0:01:28.451 *********** 2025-06-02 14:30:24.013887 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 14:30:24.013895 | orchestrator | skipping: [testbed-manager] 2025-06-02 14:30:24.013903 | orchestrator | 2025-06-02 14:30:24.013911 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.013943 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:01.385) 0:01:29.837 *********** 2025-06-02 14:30:24.013957 | orchestrator | 2025-06-02 14:30:24.013969 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.013977 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:00.053) 0:01:29.891 *********** 2025-06-02 14:30:24.013985 | orchestrator | 2025-06-02 14:30:24.013993 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.014000 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:00.051) 0:01:29.942 *********** 2025-06-02 14:30:24.014055 | orchestrator | 2025-06-02 14:30:24.014067 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.014075 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:00.058) 0:01:30.000 *********** 2025-06-02 14:30:24.014084 | orchestrator | 2025-06-02 14:30:24.014092 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.014100 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:00.082) 0:01:30.083 *********** 2025-06-02 14:30:24.014109 | orchestrator | 2025-06-02 14:30:24.014123 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.014136 | orchestrator | Monday 02 June 2025 14:28:45 +0000 (0:00:00.065) 0:01:30.148 *********** 2025-06-02 14:30:24.014148 | orchestrator | 2025-06-02 14:30:24.014156 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 14:30:24.014164 | orchestrator | Monday 02 June 2025 14:28:46 +0000 (0:00:00.170) 0:01:30.318 *********** 2025-06-02 14:30:24.014172 | orchestrator | 2025-06-02 14:30:24.014180 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 14:30:24.014194 | orchestrator | Monday 02 June 2025 14:28:46 +0000 (0:00:00.078) 0:01:30.397 *********** 2025-06-02 14:30:24.014208 | orchestrator | changed: [testbed-manager] 2025-06-02 14:30:24.014222 | orchestrator | 2025-06-02 14:30:24.014236 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-node-exporter container] ****** 2025-06-02 14:30:24.014252 | orchestrator | Monday 02 June 2025 14:29:01 +0000 (0:00:15.147) 0:01:45.544 *********** 2025-06-02 14:30:24.014266 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.014281 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.014295 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:30:24.014308 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:30:24.014328 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:30:24.014342 | orchestrator | changed: [testbed-manager] 2025-06-02 14:30:24.014356 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.014371 | orchestrator | 2025-06-02 14:30:24.014386 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 14:30:24.014402 | orchestrator | Monday 02 June 2025 14:29:16 +0000 (0:00:15.017) 0:02:00.561 *********** 2025-06-02 14:30:24.014415 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.014430 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.014444 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.014457 | orchestrator | 2025-06-02 14:30:24.014471 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 14:30:24.014491 | orchestrator | Monday 02 June 2025 14:29:26 +0000 (0:00:10.066) 0:02:10.628 *********** 2025-06-02 14:30:24.014505 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.014518 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.014532 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.014545 | orchestrator | 2025-06-02 14:30:24.014558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 14:30:24.014572 | orchestrator | Monday 02 June 2025 14:29:36 +0000 (0:00:09.825) 0:02:20.453 *********** 2025-06-02 14:30:24.014586 | orchestrator | changed: [testbed-manager] 2025-06-02 14:30:24.014599 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.014613 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.014627 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:30:24.014641 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:30:24.014656 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:30:24.014671 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.014685 | orchestrator | 2025-06-02 14:30:24.014700 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 14:30:24.014714 | orchestrator | Monday 02 June 2025 14:29:50 +0000 (0:00:14.097) 0:02:34.551 *********** 2025-06-02 14:30:24.014727 | orchestrator | changed: [testbed-manager] 2025-06-02 14:30:24.014740 | orchestrator | 2025-06-02 14:30:24.014753 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 14:30:24.014777 | orchestrator | Monday 02 June 2025 14:29:56 +0000 (0:00:06.450) 0:02:41.002 *********** 2025-06-02 14:30:24.014792 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:30:24.014807 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:30:24.014820 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:30:24.014828 | orchestrator | 2025-06-02 14:30:24.014836 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 14:30:24.014844 | orchestrator | Monday 02 June 2025 14:30:08 +0000 (0:00:11.509) 
0:02:52.511 ***********
2025-06-02 14:30:24.014852 | orchestrator | changed: [testbed-manager]
2025-06-02 14:30:24.014860 | orchestrator |
2025-06-02 14:30:24.014868 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-06-02 14:30:24.014876 | orchestrator | Monday 02 June 2025 14:30:13 +0000 (0:00:05.001) 0:02:57.513 ***********
2025-06-02 14:30:24.014884 | orchestrator | changed: [testbed-node-3]
2025-06-02 14:30:24.014897 | orchestrator | changed: [testbed-node-5]
2025-06-02 14:30:24.014908 | orchestrator | changed: [testbed-node-4]
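Each config task in this role notifies a matching container-restart handler, and the run of "Flush handlers" meta tasks above forces those restarts to happen in order before the play moves on; that is why each exporter container restarts exactly once even though several tasks changed its configuration. A minimal sketch of the notify/flush pattern, using community.docker for the restart as an illustrative stand-in (kolla-ansible actually drives restarts through its own kolla_container plumbing):

    - hosts: prometheus
      tasks:
        - name: Copying over prometheus config file
          ansible.builtin.template:
            src: prometheus.yml.j2
            dest: /etc/kolla/prometheus-server/prometheus.yml
          notify: Restart prometheus-server container

        # Run any queued restart handlers now rather than at the end of the play
        - name: Flush handlers
          ansible.builtin.meta: flush_handlers

      handlers:
        - name: Restart prometheus-server container
          community.docker.docker_container:  # stand-in for kolla's own module
            name: prometheus_server
            state: started
            restart: true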
2025-06-02 14:30:24.014943 | orchestrator |
2025-06-02 14:30:24.014953 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:30:24.014961 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 14:30:24.014976 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 14:30:24.014985 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 14:30:24.014993 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 14:30:24.015001 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 14:30:24.015009 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 14:30:24.015017 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 14:30:24.015025 | orchestrator |
2025-06-02 14:30:24.015033 | orchestrator |
2025-06-02 14:30:24.015041 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:30:24.015049 | orchestrator | Monday 02 June 2025 14:30:20 +0000 (0:00:06.981) 0:03:04.494 ***********
2025-06-02 14:30:24.015063 | orchestrator | ===============================================================================
2025-06-02 14:30:24.015077 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.31s
2025-06-02 14:30:24.015086 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.81s
2025-06-02 14:30:24.015094 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.15s
2025-06-02 14:30:24.015102 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.02s
2025-06-02 14:30:24.015110 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.10s
2025-06-02 14:30:24.015118 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.51s
2025-06-02 14:30:24.015126 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.07s
2025-06-02 14:30:24.015143 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.83s
2025-06-02 14:30:24.015156 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.98s
2025-06-02 14:30:24.015170 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.45s
2025-06-02 14:30:24.015195 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.87s
2025-06-02 14:30:24.015209 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.48s
2025-06-02 14:30:24.015221 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.00s
2025-06-02 14:30:24.015240 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.30s
2025-06-02 14:30:24.015255 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.26s
2025-06-02 14:30:24.015265 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.85s
2025-06-02 14:30:24.015273 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.77s
2025-06-02 14:30:24.015281 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.46s
2025-06-02 14:30:24.015290 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.14s
2025-06-02 14:30:24.015305 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.89s
2025-06-02 14:30:24.015320 | orchestrator | 2025-06-02 14:30:23 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED
2025-06-02 14:30:24.015335 | orchestrator | 2025-06-02 14:30:24 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED
2025-06-02 14:30:24.015350 | orchestrator | 2025-06-02 14:30:24 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED
2025-06-02 14:30:24.015364 | orchestrator | 2025-06-02 14:30:24 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED
2025-06-02 14:30:24.015377 | orchestrator | 2025-06-02 14:30:24 | INFO  | Wait 1 second(s) until the next check
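From here on, the Ansible play is finished and the OSISM client is simply polling the manager for the state of the queued deployment tasks (Celery task IDs) until each one reports SUCCESS. The same wait-until-done shape, sketched as an Ansible task against a purely hypothetical status endpoint (the real loop lives in OSISM's Python tooling, not in a playbook):

    - name: Wait until a queued task reports SUCCESS
      ansible.builtin.uri:
        url: "https://manager.example.com/api/tasks/{{ task_id }}"  # hypothetical endpoint
        return_content: true
      register: task_status
      until: task_status.json.state == 'SUCCESS'
      delay: 3      # matches the ~3 s poll interval visible in the log
      retries: 200  # give up after roughly ten minutes
      changed_when: false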
[... the same status poll repeated every ~3 s from 14:30:27 to 14:31:34 with every task remaining in state STARTED; a fifth task, 29a16ef3-f790-490c-9ac2-bb5c9aa52e8d, joined the poll at 14:31:24; the repeated lines are condensed here ...]
2025-06-02 14:31:37.043595 | orchestrator | 2025-06-02 14:31:37 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED
2025-06-02 14:31:37.043689 | orchestrator | 2025-06-02 14:31:37 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED
2025-06-02 14:31:37.043941 | orchestrator | 2025-06-02 14:31:37 | INFO  | Task 
2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:37.044690 | orchestrator | 2025-06-02 14:31:37 | INFO  | Task 29a16ef3-f790-490c-9ac2-bb5c9aa52e8d is in state STARTED 2025-06-02 14:31:37.046232 | orchestrator | 2025-06-02 14:31:37 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:37.046319 | orchestrator | 2025-06-02 14:31:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:40.083259 | orchestrator | 2025-06-02 14:31:40 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:31:40.083344 | orchestrator | 2025-06-02 14:31:40 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:40.083625 | orchestrator | 2025-06-02 14:31:40 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:40.083895 | orchestrator | 2025-06-02 14:31:40 | INFO  | Task 29a16ef3-f790-490c-9ac2-bb5c9aa52e8d is in state SUCCESS 2025-06-02 14:31:40.084572 | orchestrator | 2025-06-02 14:31:40 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:40.084619 | orchestrator | 2025-06-02 14:31:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:43.115851 | orchestrator | 2025-06-02 14:31:43 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state STARTED 2025-06-02 14:31:43.116038 | orchestrator | 2025-06-02 14:31:43 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:43.116583 | orchestrator | 2025-06-02 14:31:43 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:43.117282 | orchestrator | 2025-06-02 14:31:43 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:43.117305 | orchestrator | 2025-06-02 14:31:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:46.164307 | orchestrator | 2025-06-02 14:31:46 | INFO  | Task 6eb9b2d7-7165-4fd0-a339-28c0eaaa81f8 is in state SUCCESS 2025-06-02 14:31:46.165357 | orchestrator | 2025-06-02 14:31:46.165470 | orchestrator | None 2025-06-02 14:31:46.165485 | orchestrator | 2025-06-02 14:31:46.165497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:31:46.165510 | orchestrator | 2025-06-02 14:31:46.165521 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:31:46.165545 | orchestrator | Monday 02 June 2025 14:27:54 +0000 (0:00:00.267) 0:00:00.267 *********** 2025-06-02 14:31:46.165558 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:31:46.165571 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:31:46.165582 | orchestrator | ok: [testbed-node-2] 2025-06-02 14:31:46.165594 | orchestrator | ok: [testbed-node-3] 2025-06-02 14:31:46.165630 | orchestrator | ok: [testbed-node-4] 2025-06-02 14:31:46.165642 | orchestrator | ok: [testbed-node-5] 2025-06-02 14:31:46.165745 | orchestrator | 2025-06-02 14:31:46.165759 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 14:31:46.165770 | orchestrator | Monday 02 June 2025 14:27:55 +0000 (0:00:00.699) 0:00:00.967 *********** 2025-06-02 14:31:46.165781 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-02 14:31:46.165793 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-02 14:31:46.165804 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-02 
2025-06-02 14:31:46.165497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:31:46.165510 | orchestrator |
2025-06-02 14:31:46.165521 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:31:46.165545 | orchestrator | Monday 02 June 2025 14:27:54 +0000 (0:00:00.267) 0:00:00.267 ***********
2025-06-02 14:31:46.165558 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:31:46.165571 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:31:46.165582 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:31:46.165594 | orchestrator | ok: [testbed-node-3]
2025-06-02 14:31:46.165630 | orchestrator | ok: [testbed-node-4]
2025-06-02 14:31:46.165642 | orchestrator | ok: [testbed-node-5]
2025-06-02 14:31:46.165745 | orchestrator |
2025-06-02 14:31:46.165759 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:31:46.165770 | orchestrator | Monday 02 June 2025 14:27:55 +0000 (0:00:00.699) 0:00:00.967 ***********
2025-06-02 14:31:46.165781 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-06-02 14:31:46.165793 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-06-02 14:31:46.165804 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-06-02 14:31:46.165904 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-06-02 14:31:46.165916 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-06-02 14:31:46.165927 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-06-02 14:31:46.165939 | orchestrator |
2025-06-02 14:31:46.165980 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-06-02 14:31:46.165993 | orchestrator |
2025-06-02 14:31:46.166004 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 14:31:46.166198 | orchestrator | Monday 02 June 2025 14:27:55 +0000 (0:00:00.693) 0:00:01.660 ***********
2025-06-02 14:31:46.166218 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:31:46.166230 | orchestrator |
2025-06-02 14:31:46.166241 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-06-02 14:31:46.166253 | orchestrator | Monday 02 June 2025 14:27:56 +0000 (0:00:01.177) 0:00:02.838 ***********
2025-06-02 14:31:46.166264 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-06-02 14:31:46.166275 | orchestrator |
2025-06-02 14:31:46.166286 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-06-02 14:31:46.166296 | orchestrator | Monday 02 June 2025 14:27:59 +0000 (0:00:02.779) 0:00:05.617 ***********
2025-06-02 14:31:46.166307 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-06-02 14:31:46.166319 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-06-02 14:31:46.166355 | orchestrator |
2025-06-02 14:31:46.166366 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-06-02 14:31:46.166377 | orchestrator | Monday 02 June 2025 14:28:05 +0000 (0:00:05.401) 0:00:11.019 ***********
2025-06-02 14:31:46.166388 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 14:31:46.166399 | orchestrator |
2025-06-02 14:31:46.166410 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-06-02 14:31:46.166421 | orchestrator | Monday 02 June 2025 14:28:07 +0000 (0:00:02.813) 0:00:13.833 ***********
2025-06-02 14:31:46.166431 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 14:31:46.166442 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-06-02 14:31:46.166453 | orchestrator |
2025-06-02 14:31:46.166464 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-06-02 14:31:46.166475 | orchestrator | Monday 02 June 2025 14:28:11 +0000 (0:00:03.505) 0:00:17.338 ***********
2025-06-02 14:31:46.166485 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 14:31:46.166496 | orchestrator |
2025-06-02 14:31:46.166507 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-06-02 14:31:46.166518 | orchestrator | Monday 02 June 2025 14:28:14 +0000 (0:00:03.384) 0:00:20.722 ***********
2025-06-02 14:31:46.166529 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-06-02 14:31:46.166540 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-06-02 14:31:46.166550 | orchestrator |
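The service-ks-register block above is kolla-ansible's Keystone bootstrap for cinder: it creates the cinderv3 service of type volumev3, its internal and public endpoints, the service project, the cinder user, and the admin and service role grants. Roughly the same result expressed with openstacksdk (illustrative only; the cloud name and password are placeholders, and kolla-ansible actually drives this through its own Ansible modules):

    import openstack

    conn = openstack.connect(cloud="testbed")  # placeholder clouds.yaml entry

    # Service and its endpoints, mirroring the endpoint URLs in the log
    service = conn.identity.create_service(name="cinderv3", type="volumev3")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
        ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ]:
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    # Service project, cinder user, and the two role grants
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="cinder", default_project_id=project.id,
                                     password="placeholder")
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project, user, role)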
2025-06-02 14:31:46.166561 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-06-02 14:31:46.166572 | orchestrator | Monday 02 June 2025 14:28:22 +0000 (0:00:08.023) 0:00:28.746 ***********
2025-06-02 14:31:46.166618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.166653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.166665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.166685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.166893 | orchestrator |
2025-06-02 14:31:46.166934 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 14:31:46.166955 | orchestrator | Monday 02 June 2025 14:28:25 +0000 (0:00:02.895) 0:00:31.642 ***********
2025-06-02 14:31:46.166967 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.166978 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:31:46.166989 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:31:46.167000 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:31:46.167011 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:31:46.167022 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:31:46.167033 | orchestrator |
2025-06-02 14:31:46.167044 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 14:31:46.167055 | orchestrator | Monday 02 June 2025 14:28:26 +0000 (0:00:00.869) 0:00:32.512 ***********
2025-06-02 14:31:46.167066 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.167085 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:31:46.167122 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:31:46.167133 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:31:46.167144 | orchestrator |
2025-06-02 14:31:46.167155 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-02 14:31:46.167166 | orchestrator | Monday 02 June 2025 14:28:27 +0000 (0:00:01.392) 0:00:33.904 ***********
2025-06-02 14:31:46.167177 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-02 14:31:46.167189 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-02 14:31:46.167199 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-02 14:31:46.167210 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-02 14:31:46.167221 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-02 14:31:46.167232 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-02 14:31:46.167243 | orchestrator |
2025-06-02 14:31:46.167254 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-02 14:31:46.167265 | orchestrator | Monday 02 June 2025 14:28:29 +0000 (0:00:01.998) 0:00:35.902 ***********
2025-06-02 14:31:46.167277 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167291 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167308 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167329 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167348 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167360 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167372 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167389 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167408 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167428 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167441 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167470 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 14:31:46.167482 | orchestrator |
2025-06-02 14:31:46.167494 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-02 14:31:46.167505 | orchestrator | Monday 02 June 2025 14:28:34 +0000 (0:00:04.357) 0:00:40.260 ***********
2025-06-02 14:31:46.167516 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 14:31:46.167528 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 14:31:46.167539 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 14:31:46.167549 | orchestrator |
2025-06-02 14:31:46.167565 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-02 14:31:46.167576 | orchestrator | Monday 02 June 2025 14:28:36 +0000 (0:00:02.291) 0:00:42.551 ***********
2025-06-02 14:31:46.167595 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-02 14:31:46.167605 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-02 14:31:46.167616 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 14:31:46.167627 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:31:46.167638 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:31:46.167656 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 14:31:46.167667 | orchestrator |
2025-06-02 14:31:46.167678 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 14:31:46.167689 | orchestrator | Monday 02 June 2025 14:28:39 +0000 (0:00:02.917) 0:00:45.468 ***********
2025-06-02 14:31:46.167700 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 14:31:46.167711 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 14:31:46.167722 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 14:31:46.167733 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 14:31:46.167744 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 14:31:46.167754 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 14:31:46.167771 | orchestrator |
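Every container definition above carries a healthcheck, either healthcheck_curl against the API bind address or healthcheck_port against the named process (for example healthcheck_port cinder-volume 5672, i.e. the RabbitMQ connection of the volume service). The real kolla scripts inspect the sockets held by that process; a simplified sketch that only tests TCP reachability, not process ownership:

    import socket
    import sys

    def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> int:
        # Return 0 (healthy) if a TCP connection succeeds, 1 otherwise.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return 0
        except OSError:
            return 1

    if __name__ == "__main__":
        sys.exit(healthcheck_port("127.0.0.1", 5672))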
2025-06-02 14:31:46.167790 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 14:31:46.167802 | orchestrator | Monday 02 June 2025 14:28:40 +0000 (0:00:01.222) 0:00:46.691 ***********
2025-06-02 14:31:46.167813 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.167824 | orchestrator |
2025-06-02 14:31:46.167835 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 14:31:46.167845 | orchestrator | Monday 02 June 2025 14:28:41 +0000 (0:00:00.282) 0:00:46.973 ***********
2025-06-02 14:31:46.167856 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.167867 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:31:46.167877 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:31:46.167888 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:31:46.167899 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:31:46.167909 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:31:46.167920 | orchestrator |
2025-06-02 14:31:46.167930 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 14:31:46.167941 | orchestrator | Monday 02 June 2025 14:28:42 +0000 (0:00:01.094) 0:00:48.068 ***********
2025-06-02 14:31:46.167952 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 14:31:46.167965 | orchestrator |
2025-06-02 14:31:46.167975 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 14:31:46.167986 | orchestrator | Monday 02 June 2025 14:28:43 +0000 (0:00:01.322) 0:00:49.391 ***********
2025-06-02 14:31:46.167998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168202 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168255 | orchestrator |
2025-06-02 14:31:46.168266 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-06-02 14:31:46.168277 | orchestrator | Monday 02 June 2025 14:28:46 +0000 (0:00:02.985) 0:00:52.376 ***********
2025-06-02 14:31:46.168289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168325 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.168337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168360 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:31:46.168371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168401 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:31:46.168417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168445 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:31:46.168456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168482 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:31:46.168492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168517 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:31:46.168540 | orchestrator |
2025-06-02 14:31:46.168551 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-06-02 14:31:46.168560 | orchestrator | Monday 02 June 2025 14:28:48 +0000 (0:00:01.611) 0:00:53.987 ***********
2025-06-02 14:31:46.168577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168598 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:31:46.168608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168634 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:31:46.168649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 14:31:46.168666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168676 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:31:46.168686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168712 | orchestrator | skipping: [testbed-node-4]
2025-06-02 14:31:46.168722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168742 | orchestrator | skipping: [testbed-node-3]
2025-06-02 14:31:46.168763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 14:31:46.168785 | orchestrator | skipping: [testbed-node-5]
2025-06-02 14:31:46.168795 | orchestrator |
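The next task renders one config.json per container; at container start, kolla's entrypoint reads it to copy the mounted config files into place with the correct ownership before exec'ing the service. A hedged illustration of the general shape of such a file for cinder-api (the command and paths are assumptions for illustration, not copied from this deployment):

    import json

    # Illustrative structure only; the real file is templated by kolla-ansible.
    cinder_api_config = {
        "command": "apache2 -DFOREGROUND",  # assumption: cinder-api runs behind Apache
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/cinder.conf",
                "dest": "/etc/cinder/cinder.conf",
                "owner": "cinder",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/log/kolla/cinder", "owner": "cinder:cinder", "recurse": True},
        ],
    }

    print(json.dumps(cinder_api_config, indent=2))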
14:31:46.168804 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 14:31:46.168820 | orchestrator | Monday 02 June 2025 14:28:50 +0000 (0:00:01.935) 0:00:55.923 *********** 2025-06-02 14:31:46.168830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.168841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.168858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.168886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.168991 | orchestrator | 2025-06-02 14:31:46.169001 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 14:31:46.169010 | orchestrator | Monday 02 June 2025 14:28:52 +0000 (0:00:02.992) 0:00:58.916 *********** 2025-06-02 14:31:46.169020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 14:31:46.169030 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 14:31:46.169040 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:31:46.169049 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 14:31:46.169059 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:31:46.169069 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 14:31:46.169078 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:31:46.169088 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 14:31:46.169115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 14:31:46.169125 | orchestrator | 2025-06-02 14:31:46.169134 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 14:31:46.169144 | orchestrator | Monday 02 June 2025 14:28:55 +0000 (0:00:02.657) 0:01:01.573 *********** 2025-06-02 14:31:46.169158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169327 | orchestrator | 2025-06-02 14:31:46.169337 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 14:31:46.169346 | orchestrator | Monday 02 June 2025 14:29:09 +0000 (0:00:13.656) 0:01:15.230 *********** 2025-06-02 14:31:46.169362 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:31:46.169372 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:31:46.169382 | orchestrator | skipping: [testbed-node-1] 
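The service definitions echoed above all share the same healthcheck shape: interval, retries, start_period and timeout given as plain seconds, plus a test command, either healthcheck_curl against the service's API port or healthcheck_port against the RabbitMQ port (5672). As rough orientation only, and not kolla-ansible's actual code, the sketch below shows how such a mapping corresponds to the Docker Engine API's HealthConfig fields, which take durations in nanoseconds; the field names and example values are taken from the log above, everything else is an assumption:

    # Minimal sketch: convert a kolla-style healthcheck mapping (string seconds,
    # as printed in the log above) into the shape of Docker's HealthConfig.
    # Illustrative only, not kolla-ansible's implementation.
    NS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(hc: dict) -> dict:
        return {
            "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672']
            "Interval": int(hc["interval"]) * NS_PER_SECOND,      # seconds -> nanoseconds
            "Timeout": int(hc["timeout"]) * NS_PER_SECOND,
            "StartPeriod": int(hc["start_period"]) * NS_PER_SECOND,
            "Retries": int(hc["retries"]),
        }

    example = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port cinder-scheduler 5672"],
        "timeout": "30",
    }
    print(to_docker_healthcheck(example))

The 'hostnqn' task whose results continue below only runs on the volume nodes (testbed-node-3/4/5), which is why the three control nodes are skipped.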
2025-06-02 14:31:46.169392 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:31:46.169401 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:31:46.169410 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:31:46.169420 | orchestrator | 2025-06-02 14:31:46.169430 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 14:31:46.169439 | orchestrator | Monday 02 June 2025 14:29:11 +0000 (0:00:02.152) 0:01:17.383 *********** 2025-06-02 14:31:46.169450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:31:46.169460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169470 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:31:46.169480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:31:46.169490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169500 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:31:46.169669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169696 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:31:46.169707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 14:31:46.169717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169727 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:31:46.169737 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169770 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:31:46.169786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 14:31:46.169807 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:31:46.169817 | orchestrator | 2025-06-02 14:31:46.169826 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 14:31:46.169836 | orchestrator | Monday 02 June 2025 14:29:12 
+0000 (0:00:00.992) 0:01:18.375 *********** 2025-06-02 14:31:46.169846 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:31:46.169856 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:31:46.169865 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:31:46.169875 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:31:46.169884 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:31:46.169894 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:31:46.169903 | orchestrator | 2025-06-02 14:31:46.169913 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 14:31:46.169922 | orchestrator | Monday 02 June 2025 14:29:13 +0000 (0:00:00.730) 0:01:19.105 *********** 2025-06-02 14:31:46.169932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.169984 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 14:31:46.169994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 14:31:46.170150 | orchestrator | 2025-06-02 14:31:46.170159 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 14:31:46.170170 | orchestrator | Monday 02 June 2025 14:29:15 +0000 (0:00:02.107) 0:01:21.212 *********** 2025-06-02 14:31:46.170179 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:31:46.170189 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 14:31:46.170199 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:31:46.170208 | orchestrator | skipping: [testbed-node-3] 2025-06-02 14:31:46.170218 | orchestrator | skipping: [testbed-node-4] 2025-06-02 14:31:46.170227 | orchestrator | skipping: [testbed-node-5] 2025-06-02 14:31:46.170237 | orchestrator | 2025-06-02 14:31:46.170247 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 14:31:46.170256 | orchestrator | Monday 02 June 2025 14:29:15 +0000 (0:00:00.655) 0:01:21.868 *********** 2025-06-02 14:31:46.170266 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:31:46.170275 | orchestrator | 2025-06-02 14:31:46.170290 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-02 14:31:46.170301 | orchestrator | Monday 02 June 2025 14:29:17 +0000 (0:00:01.901) 0:01:23.770 *********** 2025-06-02 14:31:46.170335 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:31:46.170347 | orchestrator | 2025-06-02 14:31:46.170358 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 14:31:46.170370 | orchestrator | Monday 02 June 2025 14:29:19 +0000 (0:00:02.000) 0:01:25.770 *********** 2025-06-02 14:31:46.170381 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:31:46.170392 | orchestrator | 2025-06-02 14:31:46.170402 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170413 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:17.289) 0:01:43.060 *********** 2025-06-02 14:31:46.170424 | orchestrator | 2025-06-02 14:31:46.170440 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170452 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:00.143) 0:01:43.204 *********** 2025-06-02 14:31:46.170463 | orchestrator | 2025-06-02 14:31:46.170474 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170485 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:00.160) 0:01:43.365 *********** 2025-06-02 14:31:46.170496 | orchestrator | 2025-06-02 14:31:46.170507 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170517 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:00.217) 0:01:43.582 *********** 2025-06-02 14:31:46.170528 | orchestrator | 2025-06-02 14:31:46.170539 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170550 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:00.182) 0:01:43.765 *********** 2025-06-02 14:31:46.170560 | orchestrator | 2025-06-02 14:31:46.170572 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 14:31:46.170582 | orchestrator | Monday 02 June 2025 14:29:37 +0000 (0:00:00.111) 0:01:43.876 *********** 2025-06-02 14:31:46.170593 | orchestrator | 2025-06-02 14:31:46.170604 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 14:31:46.170615 | orchestrator | Monday 02 June 2025 14:29:38 +0000 (0:00:00.146) 0:01:44.023 *********** 2025-06-02 14:31:46.170626 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:31:46.170637 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:31:46.170649 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 14:31:46.170666 | orchestrator | 2025-06-02 14:31:46.170676 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-02 14:31:46.170685 | orchestrator | Monday 02 June 2025 14:30:08 +0000 (0:00:30.272) 0:02:14.295 *********** 2025-06-02 14:31:46.170695 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:31:46.170705 | orchestrator | changed: [testbed-node-1] 2025-06-02 14:31:46.170714 | orchestrator | changed: [testbed-node-2] 2025-06-02 14:31:46.170724 | orchestrator | 2025-06-02 14:31:46.170733 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-02 14:31:46.170743 | orchestrator | Monday 02 June 2025 14:30:13 +0000 (0:00:05.253) 0:02:19.548 *********** 2025-06-02 14:31:46.170753 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:31:46.170762 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:31:46.170772 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:31:46.170782 | orchestrator | 2025-06-02 14:31:46.170791 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 14:31:46.170801 | orchestrator | Monday 02 June 2025 14:31:32 +0000 (0:01:18.704) 0:03:38.253 *********** 2025-06-02 14:31:46.170810 | orchestrator | changed: [testbed-node-3] 2025-06-02 14:31:46.170820 | orchestrator | changed: [testbed-node-4] 2025-06-02 14:31:46.170830 | orchestrator | changed: [testbed-node-5] 2025-06-02 14:31:46.170839 | orchestrator | 2025-06-02 14:31:46.170849 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 14:31:46.170859 | orchestrator | Monday 02 June 2025 14:31:42 +0000 (0:00:10.037) 0:03:48.290 *********** 2025-06-02 14:31:46.170869 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:31:46.170878 | orchestrator | 2025-06-02 14:31:46.170888 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 14:31:46.170898 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 14:31:46.170908 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 14:31:46.170918 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 14:31:46.170928 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 14:31:46.170938 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 14:31:46.170948 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 14:31:46.170957 | orchestrator | 2025-06-02 14:31:46.170967 | orchestrator | 2025-06-02 14:31:46.170977 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 14:31:46.170987 | orchestrator | Monday 02 June 2025 14:31:43 +0000 (0:00:00.787) 0:03:49.077 *********** 2025-06-02 14:31:46.170996 | orchestrator | =============================================================================== 2025-06-02 14:31:46.171006 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 78.70s 2025-06-02 14:31:46.171020 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.27s 2025-06-02 
14:31:46.171030 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.29s 2025-06-02 14:31:46.171039 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.66s 2025-06-02 14:31:46.171049 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.04s 2025-06-02 14:31:46.171058 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.02s 2025-06-02 14:31:46.171068 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.40s 2025-06-02 14:31:46.171083 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.25s 2025-06-02 14:31:46.171126 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.36s 2025-06-02 14:31:46.171141 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.51s 2025-06-02 14:31:46.171151 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.38s 2025-06-02 14:31:46.171160 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.99s 2025-06-02 14:31:46.171170 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.99s 2025-06-02 14:31:46.171180 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.92s 2025-06-02 14:31:46.171189 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.90s 2025-06-02 14:31:46.171199 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.81s 2025-06-02 14:31:46.171208 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.78s 2025-06-02 14:31:46.171218 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.66s 2025-06-02 14:31:46.171228 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.29s 2025-06-02 14:31:46.171237 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.15s 2025-06-02 14:31:46.171247 | orchestrator | 2025-06-02 14:31:46 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:46.171257 | orchestrator | 2025-06-02 14:31:46 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:31:46.171267 | orchestrator | 2025-06-02 14:31:46 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:46.171277 | orchestrator | 2025-06-02 14:31:46 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:46.171344 | orchestrator | 2025-06-02 14:31:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:49.211970 | orchestrator | 2025-06-02 14:31:49 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:49.212077 | orchestrator | 2025-06-02 14:31:49 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:31:49.212150 | orchestrator | 2025-06-02 14:31:49 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:49.212172 | orchestrator | 2025-06-02 14:31:49 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:49.212191 | orchestrator | 2025-06-02 14:31:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
14:31:52.237301 | orchestrator | 2025-06-02 14:31:52 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:52.237399 | orchestrator | 2025-06-02 14:31:52 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:31:52.238196 | orchestrator | 2025-06-02 14:31:52 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:52.238902 | orchestrator | 2025-06-02 14:31:52 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:52.239008 | orchestrator | 2025-06-02 14:31:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:55.268102 | orchestrator | 2025-06-02 14:31:55 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:55.268600 | orchestrator | 2025-06-02 14:31:55 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:31:55.269491 | orchestrator | 2025-06-02 14:31:55 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:55.271645 | orchestrator | 2025-06-02 14:31:55 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:55.271697 | orchestrator | 2025-06-02 14:31:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:31:58.305093 | orchestrator | 2025-06-02 14:31:58 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:31:58.305265 | orchestrator | 2025-06-02 14:31:58 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:31:58.306893 | orchestrator | 2025-06-02 14:31:58 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:31:58.307935 | orchestrator | 2025-06-02 14:31:58 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:31:58.307963 | orchestrator | 2025-06-02 14:31:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:01.349617 | orchestrator | 2025-06-02 14:32:01 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:01.352432 | orchestrator | 2025-06-02 14:32:01 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:01.355711 | orchestrator | 2025-06-02 14:32:01 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:01.356945 | orchestrator | 2025-06-02 14:32:01 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:01.357009 | orchestrator | 2025-06-02 14:32:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:04.391515 | orchestrator | 2025-06-02 14:32:04 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:04.391837 | orchestrator | 2025-06-02 14:32:04 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:04.393657 | orchestrator | 2025-06-02 14:32:04 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:04.394419 | orchestrator | 2025-06-02 14:32:04 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:04.394531 | orchestrator | 2025-06-02 14:32:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:07.428723 | orchestrator | 2025-06-02 14:32:07 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:07.429274 | orchestrator | 2025-06-02 14:32:07 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 
14:32:07.430236 | orchestrator | 2025-06-02 14:32:07 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:07.430762 | orchestrator | 2025-06-02 14:32:07 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:07.430868 | orchestrator | 2025-06-02 14:32:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:10.466356 | orchestrator | 2025-06-02 14:32:10 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:10.466443 | orchestrator | 2025-06-02 14:32:10 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:10.466870 | orchestrator | 2025-06-02 14:32:10 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:10.468432 | orchestrator | 2025-06-02 14:32:10 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:10.468619 | orchestrator | 2025-06-02 14:32:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:13.509270 | orchestrator | 2025-06-02 14:32:13 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:13.511882 | orchestrator | 2025-06-02 14:32:13 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:13.513938 | orchestrator | 2025-06-02 14:32:13 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:13.515699 | orchestrator | 2025-06-02 14:32:13 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:13.515932 | orchestrator | 2025-06-02 14:32:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:16.546654 | orchestrator | 2025-06-02 14:32:16 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:16.546739 | orchestrator | 2025-06-02 14:32:16 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:16.546754 | orchestrator | 2025-06-02 14:32:16 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state STARTED 2025-06-02 14:32:16.547015 | orchestrator | 2025-06-02 14:32:16 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED 2025-06-02 14:32:16.547040 | orchestrator | 2025-06-02 14:32:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 14:32:19.576934 | orchestrator | 2025-06-02 14:32:19 | INFO  | Task 82335208-6129-44da-9678-9113ca3dc59c is in state STARTED 2025-06-02 14:32:19.577739 | orchestrator | 2025-06-02 14:32:19 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED 2025-06-02 14:32:19.577786 | orchestrator | 2025-06-02 14:32:19 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED 2025-06-02 14:32:19.578270 | orchestrator | 2025-06-02 14:32:19 | INFO  | Task 2a7dd438-ea96-4e53-a556-0333008750f9 is in state SUCCESS 2025-06-02 14:32:19.579897 | orchestrator | 2025-06-02 14:32:19.579947 | orchestrator | 2025-06-02 14:32:19.579962 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 14:32:19.579974 | orchestrator | 2025-06-02 14:32:19.579999 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 14:32:19.580053 | orchestrator | Monday 02 June 2025 14:30:26 +0000 (0:00:00.258) 0:00:00.259 *********** 2025-06-02 14:32:19.580125 | orchestrator | ok: [testbed-node-0] 2025-06-02 14:32:19.580340 | orchestrator | ok: [testbed-node-1] 2025-06-02 14:32:19.580351 | 
2025-06-02 14:32:19.579897 | orchestrator |
2025-06-02 14:32:19.579947 | orchestrator |
2025-06-02 14:32:19.579962 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 14:32:19.579974 | orchestrator |
2025-06-02 14:32:19.579999 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 14:32:19.580053 | orchestrator | Monday 02 June 2025 14:30:26 +0000 (0:00:00.258) 0:00:00.259 ***********
2025-06-02 14:32:19.580125 | orchestrator | ok: [testbed-node-0]
2025-06-02 14:32:19.580340 | orchestrator | ok: [testbed-node-1]
2025-06-02 14:32:19.580351 | orchestrator | ok: [testbed-node-2]
2025-06-02 14:32:19.580362 | orchestrator |
2025-06-02 14:32:19.580373 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 14:32:19.580384 | orchestrator | Monday 02 June 2025 14:30:27 +0000 (0:00:00.288) 0:00:00.547 ***********
2025-06-02 14:32:19.580395 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-06-02 14:32:19.580407 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-06-02 14:32:19.580418 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-02 14:32:19.580429 | orchestrator |
2025-06-02 14:32:19.580440 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-02 14:32:19.580453 | orchestrator |
2025-06-02 14:32:19.580466 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 14:32:19.580479 | orchestrator | Monday 02 June 2025 14:30:27 +0000 (0:00:00.403) 0:00:00.950 ***********
2025-06-02 14:32:19.580491 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:32:19.580504 | orchestrator |
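
The two grouping tasks are kolla-ansible's standard preamble: group_by sorts the inventory into dynamic groups per Kolla action and per enabled service, so the barbican play that follows only targets hosts carrying enable_barbican=True. Roughly, in plain Python (the hostvars mapping is invented for illustration):

    from collections import defaultdict

    # Hypothetical per-host flags; in kolla-ansible these come from group_vars.
    hostvars = {
        "testbed-node-0": {"enable_barbican": True},
        "testbed-node-1": {"enable_barbican": True},
        "testbed-node-2": {"enable_barbican": True},
    }

    groups = defaultdict(list)
    for host, flags in hostvars.items():
        for flag, value in flags.items():
            # Mirrors Ansible's group_by key=...: one group per flag/value pair.
            groups[f"{flag}_{value}"].append(host)

    print(dict(groups))  # {'enable_barbican_True': ['testbed-node-0', ...]}
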
2025-06-02 14:32:19.580516 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-02 14:32:19.580529 | orchestrator | Monday 02 June 2025 14:30:27 +0000 (0:00:00.525) 0:00:01.475 ***********
2025-06-02 14:32:19.580542 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-02 14:32:19.580554 | orchestrator |
2025-06-02 14:32:19.580568 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-02 14:32:19.580581 | orchestrator | Monday 02 June 2025 14:30:31 +0000 (0:00:03.357) 0:00:04.833 ***********
2025-06-02 14:32:19.580593 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-02 14:32:19.580626 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-02 14:32:19.580639 | orchestrator |
2025-06-02 14:32:19.580651 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-02 14:32:19.580664 | orchestrator | Monday 02 June 2025 14:30:37 +0000 (0:00:06.344) 0:00:11.178 ***********
2025-06-02 14:32:19.580677 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 14:32:19.580689 | orchestrator |
2025-06-02 14:32:19.580702 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-02 14:32:19.580714 | orchestrator | Monday 02 June 2025 14:30:40 +0000 (0:00:03.049) 0:00:14.227 ***********
2025-06-02 14:32:19.580727 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 14:32:19.580739 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-02 14:32:19.580751 | orchestrator |
2025-06-02 14:32:19.580763 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-02 14:32:19.580776 | orchestrator | Monday 02 June 2025 14:30:44 +0000 (0:00:03.539) 0:00:17.767 ***********
2025-06-02 14:32:19.580789 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 14:32:19.580801 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-02 14:32:19.580812 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-02 14:32:19.580823 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-02 14:32:19.580834 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-02 14:32:19.580845 | orchestrator |
2025-06-02 14:32:19.580856 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-02 14:32:19.580866 | orchestrator | Monday 02 June 2025 14:30:58 +0000 (0:00:14.761) 0:00:32.529 ***********
2025-06-02 14:32:19.580877 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-02 14:32:19.580888 | orchestrator |
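
service-ks-register is plain Keystone bootstrap work: register the barbican service under the key-manager type, add its internal and public endpoints on port 9311, ensure the service project and the barbican user exist, create the barbican-specific roles, and grant the service user admin on the service project. The [WARNING] about no_log means the module that sets the user's password would not mask it at higher verbosity, which is worth knowing before re-running with -vvv. The same sequence sketched with the plain openstack CLI (URLs copied from the log; the password is a placeholder, and the real role uses Ansible modules rather than the CLI):

    import subprocess

    def openstack(*args):
        # Thin wrapper; assumes admin credentials are already in the environment.
        subprocess.run(["openstack", *args], check=True)

    openstack("service", "create", "--name", "barbican", "key-manager")
    openstack("endpoint", "create", "barbican", "internal",
              "https://api-int.testbed.osism.xyz:9311")
    openstack("endpoint", "create", "barbican", "public",
              "https://api.testbed.osism.xyz:9311")
    openstack("user", "create", "--project", "service",
              "--password", "PLACEHOLDER", "barbican")
    for role in ("key-manager:service-admin", "creator", "observer", "audit"):
        openstack("role", "create", role)
    openstack("role", "add", "--project", "service", "--user", "barbican", "admin")
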
2025-06-02 14:32:19.580899 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-02 14:32:19.580910 | orchestrator | Monday 02 June 2025 14:31:03 +0000 (0:00:04.428) 0:00:36.957 ***********
2025-06-02 14:32:19.580924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.580964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.580984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.580996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581086 | orchestrator |
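
Each loop item echoed above is one entry of the role's service map: container name, image, bind mounts, a healthcheck, and, for the API, the haproxy frontends. The healthchecks come in two flavors: healthcheck_curl against the API's bind address, and healthcheck_port against the RabbitMQ port 5672 for listener and worker. A sketch that walks such a map and runs a stdlib approximation of the declared probe (the kolla helpers are small shell wrappers inside the image; healthcheck_port actually inspects the process's own connections, which a plain TCP connect from outside can only approximate):

    import socket
    import urllib.error
    import urllib.request

    services = {
        "barbican-api": "healthcheck_curl http://192.168.16.10:9311",
        "barbican-worker": "healthcheck_port barbican-worker 5672",
    }

    def probe(check: str, host: str = "192.168.16.10") -> bool:
        kind, *args = check.split()
        if kind == "healthcheck_curl":
            try:
                urllib.request.urlopen(args[0], timeout=5)
                return True
            except urllib.error.HTTPError:
                return True  # the server answered; an error status still means alive
            except OSError:
                return False
        if kind == "healthcheck_port":
            try:  # approximation only: try the port the process should be using
                with socket.create_connection((host, int(args[1])), timeout=5):
                    return True
            except OSError:
                return False
        return False

    for name, check in services.items():
        print(name, "healthy" if probe(check) else "unhealthy")
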
2025-06-02 14:32:19.581097 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-02 14:32:19.581108 | orchestrator | Monday 02 June 2025 14:31:05 +0000 (0:00:02.157) 0:00:39.114 ***********
2025-06-02 14:32:19.581119 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-02 14:32:19.581130 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-02 14:32:19.581141 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-02 14:32:19.581151 | orchestrator |
2025-06-02 14:32:19.581194 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-02 14:32:19.581214 | orchestrator | Monday 02 June 2025 14:31:06 +0000 (0:00:00.837) 0:00:39.952 ***********
2025-06-02 14:32:19.581233 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:32:19.581246 | orchestrator |
2025-06-02 14:32:19.581257 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-02 14:32:19.581268 | orchestrator | Monday 02 June 2025 14:31:06 +0000 (0:00:00.119) 0:00:40.072 ***********
2025-06-02 14:32:19.581279 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:32:19.581290 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:32:19.581300 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:32:19.581311 | orchestrator |
2025-06-02 14:32:19.581322 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 14:32:19.581333 | orchestrator | Monday 02 June 2025 14:31:06 +0000 (0:00:00.397) 0:00:40.470 ***********
2025-06-02 14:32:19.581344 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 14:32:19.581355 | orchestrator |
2025-06-02 14:32:19.581366 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-02 14:32:19.581377 | orchestrator | Monday 02 June 2025 14:31:07 +0000 (0:00:00.492) 0:00:40.963 ***********
2025-06-02 14:32:19.581388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.581415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.581435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.581447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.581545 | orchestrator | 2025-06-02 14:32:19.581556 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 14:32:19.581568 | orchestrator | Monday 02 June 2025 14:31:11 +0000 (0:00:03.592) 0:00:44.556 *********** 2025-06-02 14:32:19.581579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.581591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581614 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:32:19.581636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.581655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581678 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:32:19.581689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.581701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581730 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:32:19.581741 | orchestrator | 2025-06-02 14:32:19.581752 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 14:32:19.581763 | orchestrator | Monday 02 June 2025 14:31:11 +0000 (0:00:00.634) 0:00:45.190 *********** 2025-06-02 14:32:19.581786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.581798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581821 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:32:19.581832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.581844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.581871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581883 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:32:19.581902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.581914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.581925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582101 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:32:19.582122 | orchestrator |
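
Both backend-TLS tasks skip on every node and every item because this testbed terminates TLS at haproxy and runs the barbican backends over plain HTTP: each haproxy entry above carries tls_backend: 'no', so the condition guarding the certificate and key copy is false. A simplified stand-in for that condition (a guess at the shape; kolla's actual when: expression also involves the global kolla_enable_tls_backend switch):

    def needs_backend_tls(service: dict, kolla_enable_tls_backend: bool = False) -> bool:
        # Simplified stand-in for the role's `when:` expression.
        frontends = service.get("haproxy", {}).values()
        return kolla_enable_tls_backend or any(
            f.get("tls_backend") == "yes" for f in frontends)

    barbican_api = {"haproxy": {"barbican_api": {"tls_backend": "no"},
                                "barbican_api_external": {"tls_backend": "no"}}}
    print(needs_backend_tls(barbican_api))  # False -> the copy tasks skip
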
2025-06-02 14:32:19.582134 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-06-02 14:32:19.582145 | orchestrator | Monday 02 June 2025 14:31:13 +0000 (0:00:01.715) 0:00:46.906 ***********
2025-06-02 14:32:19.582176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.582399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.582452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.582463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 14:32:19.582535 | orchestrator |
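
The config.json files drive kolla_start inside each container: on startup they tell it which command to exec and which files to copy from the bind-mounted /var/lib/kolla/config_files into their final place. A hand-written example in that spirit (the field names follow the kolla config format; the concrete command and paths are assumptions, not the rendered testbed file):

    import json

    config = {
        "command": "uwsgi /etc/barbican/vassals/barbican-api.ini",  # assumption
        "config_files": [
            {"source": "/var/lib/kolla/config_files/barbican.conf",
             "dest": "/etc/barbican/barbican.conf",
             "owner": "barbican",
             "perm": "0600"},
        ],
    }
    print(json.dumps(config, indent=4))
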
2025-06-02 14:32:19.582543 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-06-02 14:32:19.582551 | orchestrator | Monday 02 June 2025 14:31:17 +0000 (0:00:03.735) 0:00:50.642 ***********
2025-06-02 14:32:19.582557 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:32:19.582564 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:32:19.582571 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:32:19.582577 | orchestrator |
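
barbican-api.ini is the uWSGI side of the service: kolla runs barbican-api under a uWSGI emperor, and the vassals directory created earlier in the play is where the emperor picks up exactly this kind of per-application ini. An illustrative writer for such a file (the option names are common uWSGI settings; the values and the wsgi entry point are assumptions, not kolla's template):

    from pathlib import Path

    vassal = (
        "[uwsgi]\n"
        "name = barbican-api\n"
        "http-socket = 192.168.16.10:9311\n"       # matches the node's healthcheck address
        "wsgi-file = /usr/bin/barbican-wsgi-api\n"  # assumed entry point
        "processes = 4\n"
    )
    target = Path("/etc/kolla/barbican-api/vassals/barbican-api.ini")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(vassal)
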
2025-06-02 14:32:19.582584 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-06-02 14:32:19.582590 | orchestrator | Monday 02 June 2025 14:31:19 +0000 (0:00:02.188) 0:00:52.830 ***********
2025-06-02 14:32:19.582597 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 14:32:19.582604 | orchestrator |
2025-06-02 14:32:19.582611 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-06-02 14:32:19.582618 | orchestrator | Monday 02 June 2025 14:31:20 +0000 (0:00:01.064) 0:00:53.894 ***********
2025-06-02 14:32:19.582624 | orchestrator | skipping: [testbed-node-1]
2025-06-02 14:32:19.582631 | orchestrator | skipping: [testbed-node-0]
2025-06-02 14:32:19.582638 | orchestrator | skipping: [testbed-node-2]
2025-06-02 14:32:19.582645 | orchestrator |
2025-06-02 14:32:19.582651 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-06-02 14:32:19.582659 | orchestrator | Monday 02 June 2025 14:31:20 +0000 (0:00:00.560) 0:00:54.455 ***********
2025-06-02 14:32:19.582666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.582679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 14:32:19.582695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.582703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582745 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582752 | orchestrator | 2025-06-02 14:32:19.582758 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 14:32:19.582768 | orchestrator | Monday 02 June 2025 14:31:30 +0000 (0:00:09.787) 0:01:04.242 *********** 2025-06-02 14:32:19.582780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.582788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582806 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:32:19.582814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.582821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582841 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:32:19.582848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 14:32:19.582856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 14:32:19.582874 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:32:19.582881 | orchestrator | 2025-06-02 14:32:19.582888 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 14:32:19.582895 | orchestrator | Monday 02 June 2025 14:31:33 +0000 (0:00:02.732) 0:01:06.975 *********** 2025-06-02 14:32:19.582902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.582915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.582923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 14:32:19.582934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 14:32:19.582986 | orchestrator | 2025-06-02 14:32:19.582993 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 14:32:19.583000 | orchestrator | Monday 02 June 2025 14:31:38 +0000 (0:00:04.804) 0:01:11.779 *********** 2025-06-02 14:32:19.583007 | orchestrator | skipping: [testbed-node-0] 2025-06-02 14:32:19.583014 | orchestrator | skipping: [testbed-node-1] 2025-06-02 14:32:19.583021 | orchestrator | skipping: [testbed-node-2] 2025-06-02 14:32:19.583028 | orchestrator | 2025-06-02 14:32:19.583035 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 14:32:19.583042 | orchestrator | Monday 02 June 2025 14:31:38 +0000 (0:00:00.418) 0:01:12.198 *********** 2025-06-02 14:32:19.583049 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:32:19.583056 | orchestrator | 2025-06-02 14:32:19.583062 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 14:32:19.583069 | orchestrator | Monday 02 June 2025 14:31:40 +0000 (0:00:02.018) 0:01:14.216 *********** 2025-06-02 14:32:19.583075 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:32:19.583082 | orchestrator | 2025-06-02 14:32:19.583088 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 14:32:19.583094 | orchestrator | Monday 02 June 2025 14:31:42 +0000 (0:00:02.229) 0:01:16.446 *********** 2025-06-02 14:32:19.583101 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:32:19.583108 | orchestrator | 2025-06-02 14:32:19.583115 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 14:32:19.583122 | orchestrator | Monday 02 June 2025 14:31:54 +0000 (0:00:11.412) 0:01:27.859 *********** 2025-06-02 14:32:19.583128 | orchestrator | 2025-06-02 14:32:19.583179 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 14:32:19.583189 | orchestrator | Monday 02 June 2025 14:31:54 +0000 (0:00:00.072) 0:01:27.931 *********** 2025-06-02 14:32:19.583197 | orchestrator | 2025-06-02 14:32:19.583204 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 14:32:19.583211 | orchestrator | Monday 02 June 2025 14:31:54 +0000 (0:00:00.109) 0:01:28.041 *********** 2025-06-02 14:32:19.583218 | orchestrator | 2025-06-02 14:32:19.583225 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-02 14:32:19.583231 | orchestrator | Monday 02 June 2025 14:31:54 +0000 (0:00:00.094) 0:01:28.135 *********** 2025-06-02 14:32:19.583238 | orchestrator | changed: [testbed-node-0] 2025-06-02 14:32:19.583245 | 
orchestrator | changed: [testbed-node-1]
2025-06-02 14:32:19.583252 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:32:19.583259 | orchestrator |
2025-06-02 14:32:19.583266 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-06-02 14:32:19.583273 | orchestrator | Monday 02 June 2025 14:32:01 +0000 (0:00:07.398) 0:01:35.535 ***********
2025-06-02 14:32:19.583280 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:32:19.583288 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:32:19.583295 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:32:19.583302 | orchestrator |
2025-06-02 14:32:19.583309 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-06-02 14:32:19.583316 | orchestrator | Monday 02 June 2025 14:32:07 +0000 (0:00:05.841) 0:01:41.376 ***********
2025-06-02 14:32:19.583323 | orchestrator | changed: [testbed-node-1]
2025-06-02 14:32:19.583329 | orchestrator | changed: [testbed-node-2]
2025-06-02 14:32:19.583337 | orchestrator | changed: [testbed-node-0]
2025-06-02 14:32:19.583344 | orchestrator |
2025-06-02 14:32:19.583351 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 14:32:19.583358 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 14:32:19.583366 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 14:32:19.583383 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 14:32:19.583391 | orchestrator |
2025-06-02 14:32:19.583398 | orchestrator |
2025-06-02 14:32:19.583404 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 14:32:19.583412 | orchestrator | Monday 02 June 2025 14:32:17 +0000 (0:00:09.309) 0:01:50.686 ***********
2025-06-02 14:32:19.583419 | orchestrator | ===============================================================================
2025-06-02 14:32:19.583429 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.76s
2025-06-02 14:32:19.583442 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.41s
2025-06-02 14:32:19.583449 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.79s
2025-06-02 14:32:19.583456 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.31s
2025-06-02 14:32:19.583463 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.40s
2025-06-02 14:32:19.583470 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.34s
2025-06-02 14:32:19.583477 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.84s
2025-06-02 14:32:19.583484 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.80s
2025-06-02 14:32:19.583490 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.43s
2025-06-02 14:32:19.583497 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.74s
2025-06-02 14:32:19.583504 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.59s
2025-06-02 14:32:19.583511 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.54s
2025-06-02 14:32:19.583518 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.36s
2025-06-02 14:32:19.583525 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s
2025-06-02 14:32:19.583532 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.73s
2025-06-02 14:32:19.583539 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.23s
2025-06-02 14:32:19.583546 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.19s
2025-06-02 14:32:19.583554 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.16s
2025-06-02 14:32:19.583561 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.02s
2025-06-02 14:32:19.583568 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.72s
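Each barbican container definition above carries a healthcheck block: kolla runs the listed test command inside the container every interval seconds (healthcheck_curl against the API on port 9311, healthcheck_port for the AMQP connection on 5672), allowing retries failures after a start_period grace window. As a rough stand-in for what such probes amount to (a sketch of the pattern only, not kolla's actual scripts; note that healthcheck_port inspects the named process's existing connection to the port, so the TCP connect below is only a loose analogue):

import socket
import sys
import urllib.error
import urllib.request

def http_ok(url: str, timeout: float = 30.0) -> bool:
    """HTTP probe in the spirit of healthcheck_curl: GET the endpoint, succeed on any non-5xx answer."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        return err.code < 500
    except OSError:
        return False

def port_ok(host: str, port: int, timeout: float = 30.0) -> bool:
    """TCP probe; a loose analogue of healthcheck_port: succeed if host:port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Endpoint and timeout taken from the barbican_api healthcheck in the log above.
    sys.exit(0 if http_ok("http://192.168.16.11:9311", timeout=30.0) else 1)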
2025-06-02 14:32:19.583575 | orchestrator | 2025-06-02 14:32:19 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED
2025-06-02 14:32:19.583583 | orchestrator | 2025-06-02 14:32:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:32:22.615263 | orchestrator | 2025-06-02 14:32:22 | INFO  | Task 82335208-6129-44da-9678-9113ca3dc59c is in state STARTED
2025-06-02 14:32:22.615553 | orchestrator | 2025-06-02 14:32:22 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED
2025-06-02 14:32:22.615990 | orchestrator | 2025-06-02 14:32:22 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED
2025-06-02 14:32:22.616865 | orchestrator | 2025-06-02 14:32:22 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED
2025-06-02 14:32:22.616894 | orchestrator | 2025-06-02 14:32:22 | INFO  | Wait 1 second(s) until the next check
[... 13 identical check cycles from 14:32:25 to 14:33:02 omitted; all four tasks remained in state STARTED ...]
2025-06-02 14:33:05.122417 | orchestrator | 2025-06-02 14:33:05 | INFO  | Task 82335208-6129-44da-9678-9113ca3dc59c is in state SUCCESS
2025-06-02 14:33:05.122858 | orchestrator | 2025-06-02 14:33:05 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED
2025-06-02 14:33:05.123640 | orchestrator | 2025-06-02 14:33:05 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED
2025-06-02 14:33:05.124437 | orchestrator | 2025-06-02 14:33:05 | INFO  | Task 4cf0b054-1cb1-47eb-970f-84a56d19a5ce is in state STARTED
2025-06-02 14:33:05.125130 | orchestrator | 2025-06-02 14:33:05 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED
2025-06-02 14:33:05.125256 | orchestrator | 2025-06-02 14:33:05 | INFO  | Wait 1 second(s) until the next check
[... 17 identical check cycles from 14:33:08 to 14:33:56 omitted; the remaining four tasks stayed in state STARTED ...]
2025-06-02 14:33:59.880030 | orchestrator | 2025-06-02 14:33:59 | INFO  | Task 67dba702-04d7-4984-b175-389635e9f02d is in state STARTED
2025-06-02 14:33:59.881933 | orchestrator | 2025-06-02 14:33:59 | INFO  | Task 51685601-e5c0-4af9-bc47-000f35e8feb0 is in state STARTED
2025-06-02 14:33:59.883844 | orchestrator | 2025-06-02 14:33:59 | INFO  | Task 4cf0b054-1cb1-47eb-970f-84a56d19a5ce is in state STARTED
2025-06-02 14:33:59.885514 | orchestrator | 2025-06-02 14:33:59 | INFO  | Task 0e71d34c-6cb0-4028-b855-70caa45bd0e2 is in state STARTED
2025-06-02 14:33:59.885753 | orchestrator | 2025-06-02 14:33:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 14:34:00.368010 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
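The deploy run ends with RESULT_TIMED_OUT while four manager tasks are still being polled. The loop the log shows (query each task's state, print it, sleep, repeat until everything reaches a terminal state or the job deadline cuts it off) is a generic watcher; a minimal sketch, with get_task_state standing in as a hypothetical accessor for the task backend:

import time

def wait_for_tasks(task_ids, get_task_state, deadline_s=7200.0, interval_s=1.0):
    """Poll task states until all tasks finish or the deadline expires.

    get_task_state is a caller-supplied function (hypothetical here) mapping a
    task UUID to a state string such as "STARTED" or "SUCCESS".
    """
    start = time.monotonic()
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discarding is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() - start > deadline_s:
            raise TimeoutError(f"{len(pending)} task(s) still pending")
        print(f"Wait {interval_s:g} second(s) until the next check")
        time.sleep(interval_s)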
2025-06-02 14:34:00.370173 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 14:34:01.115605 |
2025-06-02 14:34:01.115806 | PLAY [Post output play]
2025-06-02 14:34:01.133453 |
2025-06-02 14:34:01.133601 | LOOP [stage-output : Register sources]
2025-06-02 14:34:01.204664 |
2025-06-02 14:34:01.205045 | TASK [stage-output : Check sudo]
2025-06-02 14:34:02.082475 | orchestrator | sudo: a password is required
2025-06-02 14:34:02.250598 | orchestrator | ok: Runtime: 0:00:00.015563
2025-06-02 14:34:02.258571 |
2025-06-02 14:34:02.258741 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 14:34:02.298689 |
2025-06-02 14:34:02.299042 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 14:34:02.364437 | orchestrator | ok
2025-06-02 14:34:02.374165 |
2025-06-02 14:34:02.374325 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 14:34:02.833671 | orchestrator | ok: "docs"
2025-06-02 14:34:02.833995 |
2025-06-02 14:34:03.112867 | orchestrator | ok: "artifacts"
2025-06-02 14:34:03.364830 | orchestrator | ok: "logs"
2025-06-02 14:34:03.379645 |
2025-06-02 14:34:03.379837 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 14:34:03.417621 |
2025-06-02 14:34:03.417949 | TASK [stage-output : Make all log files readable]
2025-06-02 14:34:03.725271 | orchestrator | ok
2025-06-02 14:34:03.734592 |
2025-06-02 14:34:03.734795 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 14:34:03.770622 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:03.786233 |
2025-06-02 14:34:03.786407 | TASK [stage-output : Discover log files for compression]
2025-06-02 14:34:03.811099 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:03.824424 |
2025-06-02 14:34:03.824582 | LOOP [stage-output : Archive everything from logs]
2025-06-02 14:34:03.873658 |
2025-06-02 14:34:03.873901 | PLAY [Post cleanup play]
2025-06-02 14:34:03.883968 |
2025-06-02 14:34:03.884086 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 14:34:03.962943 | orchestrator | ok
2025-06-02 14:34:03.980798 |
2025-06-02 14:34:03.980957 | TASK [Set cloud fact (local deployment)]
2025-06-02 14:34:04.026410 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:04.044318 |
2025-06-02 14:34:04.044498 | TASK [Clean the cloud environment]
2025-06-02 14:34:04.629666 | orchestrator | 2025-06-02 14:34:04 - clean up servers
2025-06-02 14:34:05.373197 | orchestrator | 2025-06-02 14:34:05 - testbed-manager
2025-06-02 14:34:05.465344 | orchestrator | 2025-06-02 14:34:05 - testbed-node-0
2025-06-02 14:34:05.555488 | orchestrator | 2025-06-02 14:34:05 - testbed-node-1
2025-06-02 14:34:05.642744 | orchestrator | 2025-06-02 14:34:05 - testbed-node-3
2025-06-02 14:34:05.742001 | orchestrator | 2025-06-02 14:34:05 - testbed-node-5
2025-06-02 14:34:05.842709 | orchestrator | 2025-06-02 14:34:05 - testbed-node-4
2025-06-02 14:34:05.931756 | orchestrator | 2025-06-02 14:34:05 - testbed-node-2
2025-06-02 14:34:06.015264 | orchestrator | 2025-06-02 14:34:06 - clean up keypairs
2025-06-02 14:34:06.032685 | orchestrator | 2025-06-02 14:34:06 - testbed
2025-06-02 14:34:06.059673 | orchestrator | 2025-06-02 14:34:06 - wait for servers to be gone
2025-06-02 14:34:14.781915 | orchestrator | 2025-06-02 14:34:14 - clean up ports
2025-06-02 14:34:14.968882 | orchestrator | 2025-06-02 14:34:14 - 18b6975f-aa8f-4d8e-961b-05124e7aa01e
2025-06-02 14:34:15.250149 | orchestrator | 2025-06-02 14:34:15 - 27d52891-816a-4436-9d38-6c75f2ca3ca7
2025-06-02 14:34:15.507557 | orchestrator | 2025-06-02 14:34:15 - 9d539432-5ee1-4dc5-844c-57c75767245f
2025-06-02 14:34:15.726376 | orchestrator | 2025-06-02 14:34:15 - a2d9d36e-8956-422c-8ea8-cae0175fca58
2025-06-02 14:34:15.974397 | orchestrator | 2025-06-02 14:34:15 - c9afcc7f-6464-480d-b0e2-a4d796f6af1c
2025-06-02 14:34:16.215427 | orchestrator | 2025-06-02 14:34:16 - ce05e247-cd50-4647-abdd-fd7238bd2806
2025-06-02 14:34:16.483708 | orchestrator | 2025-06-02 14:34:16 - d9f2a123-a8c5-4b3a-b6e6-cb41e865dd5e
2025-06-02 14:34:16.886562 | orchestrator | 2025-06-02 14:34:16 - clean up volumes
2025-06-02 14:34:17.003217 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-5-node-base
2025-06-02 14:34:17.039451 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-1-node-base
2025-06-02 14:34:17.077319 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-4-node-base
2025-06-02 14:34:17.119225 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-2-node-base
2025-06-02 14:34:17.160089 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-3-node-base
2025-06-02 14:34:17.198371 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-0-node-base
2025-06-02 14:34:17.237879 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-manager-base
2025-06-02 14:34:17.280393 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-0-node-3
2025-06-02 14:34:17.318506 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-2-node-5
2025-06-02 14:34:17.357629 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-3-node-3
2025-06-02 14:34:17.396281 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-6-node-3
2025-06-02 14:34:17.592384 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-7-node-4
2025-06-02 14:34:17.636130 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-1-node-4
2025-06-02 14:34:17.679864 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-5-node-5
2025-06-02 14:34:17.724094 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-4-node-4
2025-06-02 14:34:17.769104 | orchestrator | 2025-06-02 14:34:17 - testbed-volume-8-node-5
2025-06-02 14:34:17.807435 | orchestrator | 2025-06-02 14:34:17 - disconnect routers
2025-06-02 14:34:17.930952 | orchestrator | 2025-06-02 14:34:17 - testbed
2025-06-02 14:34:18.859494 | orchestrator | 2025-06-02 14:34:18 - clean up subnets
2025-06-02 14:34:18.900245 | orchestrator | 2025-06-02 14:34:18 - subnet-testbed-management
2025-06-02 14:34:19.098828 | orchestrator | 2025-06-02 14:34:19 - clean up networks
2025-06-02 14:34:19.256359 | orchestrator | 2025-06-02 14:34:19 - net-testbed-management
2025-06-02 14:34:19.536514 | orchestrator | 2025-06-02 14:34:19 - clean up security groups
2025-06-02 14:34:19.577199 | orchestrator | 2025-06-02 14:34:19 - testbed-node
2025-06-02 14:34:20.149448 | orchestrator | 2025-06-02 14:34:20 - testbed-management
2025-06-02 14:34:20.261553 | orchestrator | 2025-06-02 14:34:20 - clean up floating ips
2025-06-02 14:34:20.296859 | orchestrator | 2025-06-02 14:34:20 - 81.163.192.129
2025-06-02 14:34:20.676908 | orchestrator | 2025-06-02 14:34:20 - clean up routers
2025-06-02 14:34:20.736527 | orchestrator | 2025-06-02 14:34:20 - testbed
2025-06-02 14:34:21.602771 | orchestrator | ok: Runtime: 0:00:17.177079
2025-06-02 14:34:21.606170 |
2025-06-02 14:34:21.606275 | PLAY RECAP
2025-06-02 14:34:21.606347 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 14:34:21.606378 |
2025-06-02 14:34:21.739054 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
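The cleanup task walks the resources in reverse dependency order: servers and keypairs first, a wait until the servers are really gone, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and the router last. A compressed sketch of the same ordering with openstacksdk; the cloud name, the "testbed" name filters, the floating-IP status check, and the absent error handling are all assumptions for illustration:

import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is an assumption

servers = [s for s in conn.compute.servers() if s.name.startswith("testbed")]
for server in servers:                               # clean up servers
    conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():              # clean up keypairs
    if keypair.name == "testbed":
        conn.compute.delete_keypair(keypair)
for server in servers:                               # wait for servers to be gone
    conn.compute.wait_for_delete(server)

network = conn.network.find_network("net-testbed-management")
if network:
    for port in conn.network.ports(network_id=network.id):   # clean up ports
        conn.network.delete_port(port)
for volume in conn.block_storage.volumes():                  # clean up volumes
    if volume.name.startswith("testbed-volume"):
        conn.block_storage.delete_volume(volume)

router = conn.network.find_router("testbed")
if router:                                                   # disconnect routers
    for subnet in conn.network.subnets():
        if subnet.name == "subnet-testbed-management":
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
            conn.network.delete_subnet(subnet)               # clean up subnets
if network:
    conn.network.delete_network(network)                     # clean up networks
for group in conn.network.security_groups():                 # clean up security groups
    if group.name.startswith("testbed"):
        conn.network.delete_security_group(group)
for ip in conn.network.ips():                                # clean up floating ips
    if ip.status == "DOWN":  # detached after port deletion (assumed criterion)
        conn.network.delete_ip(ip)
if router:
    conn.network.delete_router(router)                       # clean up routers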
2025-06-02 14:34:21.740067 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 14:34:22.523771 |
2025-06-02 14:34:22.523998 | PLAY [Cleanup play]
2025-06-02 14:34:22.541416 |
2025-06-02 14:34:22.541590 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 14:34:22.599391 | orchestrator | ok
2025-06-02 14:34:22.608687 |
2025-06-02 14:34:22.608907 | TASK [Set cloud fact (local deployment)]
2025-06-02 14:34:22.635188 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:22.651527 |
2025-06-02 14:34:22.651688 | TASK [Clean the cloud environment]
2025-06-02 14:34:23.806312 | orchestrator | 2025-06-02 14:34:23 - clean up servers
2025-06-02 14:34:24.269520 | orchestrator | 2025-06-02 14:34:24 - clean up keypairs
2025-06-02 14:34:24.281724 | orchestrator | 2025-06-02 14:34:24 - wait for servers to be gone
2025-06-02 14:34:24.319390 | orchestrator | 2025-06-02 14:34:24 - clean up ports
2025-06-02 14:34:24.396258 | orchestrator | 2025-06-02 14:34:24 - clean up volumes
2025-06-02 14:34:24.461773 | orchestrator | 2025-06-02 14:34:24 - disconnect routers
2025-06-02 14:34:24.487616 | orchestrator | 2025-06-02 14:34:24 - clean up subnets
2025-06-02 14:34:24.509416 | orchestrator | 2025-06-02 14:34:24 - clean up networks
2025-06-02 14:34:24.662451 | orchestrator | 2025-06-02 14:34:24 - clean up security groups
2025-06-02 14:34:24.693974 | orchestrator | 2025-06-02 14:34:24 - clean up floating ips
2025-06-02 14:34:24.718048 | orchestrator | 2025-06-02 14:34:24 - clean up routers
2025-06-02 14:34:25.193110 | orchestrator | ok: Runtime: 0:00:01.281013
2025-06-02 14:34:25.197268 |
2025-06-02 14:34:25.197442 | PLAY RECAP
2025-06-02 14:34:25.197585 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 14:34:25.197652 |
2025-06-02 14:34:25.337660 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 14:34:25.338691 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 14:34:26.110059 |
2025-06-02 14:34:26.110236 | PLAY [Base post-fetch]
2025-06-02 14:34:26.126245 |
2025-06-02 14:34:26.126399 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 14:34:26.192802 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:26.205051 |
2025-06-02 14:34:26.205279 | TASK [fetch-output : Set log path for single node]
2025-06-02 14:34:26.264781 | orchestrator | ok
2025-06-02 14:34:26.276294 |
2025-06-02 14:34:26.276468 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 14:34:26.813409 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/logs"
2025-06-02 14:34:27.103444 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/artifacts"
2025-06-02 14:34:27.392353 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/docs"
2025-06-02 14:34:27.421421 |
2025-06-02 14:34:27.421678 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 14:34:28.367910 | orchestrator | changed: .d..t...... ./
2025-06-02 14:34:28.368194 | orchestrator | changed: All items complete
2025-06-02 14:34:28.368236 |
2025-06-02 14:34:29.106686 | orchestrator | changed: .d..t...... ./
2025-06-02 14:34:29.844435 | orchestrator | changed: .d..t...... ./
2025-06-02 14:34:29.873710 |
2025-06-02 14:34:29.873908 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 14:34:29.927324 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:29.932583 | orchestrator | skipping: Conditional result was False
2025-06-02 14:34:29.955941 |
2025-06-02 14:34:29.956073 | PLAY RECAP
2025-06-02 14:34:29.956155 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 14:34:29.956197 |
2025-06-02 14:34:30.100034 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 14:34:30.103468 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 14:34:30.906787 |
2025-06-02 14:34:30.906993 | PLAY [Base post]
2025-06-02 14:34:30.921871 |
2025-06-02 14:34:30.922028 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 14:34:31.949833 | orchestrator | changed
2025-06-02 14:34:31.967489 |
2025-06-02 14:34:31.967730 | PLAY RECAP
2025-06-02 14:34:31.967890 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 14:34:31.968032 |
2025-06-02 14:34:32.165776 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 14:34:32.168496 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 14:34:32.963397 |
2025-06-02 14:34:32.963577 | PLAY [Base post-logs]
2025-06-02 14:34:32.974651 |
2025-06-02 14:34:32.974875 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 14:34:33.450026 | localhost | changed
2025-06-02 14:34:33.463828 |
2025-06-02 14:34:33.464180 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-02 14:34:33.502715 | localhost | ok
2025-06-02 14:34:33.509167 |
2025-06-02 14:34:33.509352 | TASK [Set zuul-log-path fact]
2025-06-02 14:34:33.540208 | localhost | ok
2025-06-02 14:34:33.557590 |
2025-06-02 14:34:33.557797 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 14:34:33.596280 | localhost | ok
2025-06-02 14:34:33.601481 |
2025-06-02 14:34:33.601631 | TASK [upload-logs : Create log directories]
2025-06-02 14:34:34.190810 | localhost | changed
2025-06-02 14:34:34.194417 |
2025-06-02 14:34:34.194533 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-02 14:34:34.708303 | localhost -> localhost | ok: Runtime: 0:00:00.007074
2025-06-02 14:34:34.719447 |
2025-06-02 14:34:34.719683 | TASK [upload-logs : Upload logs to log server]
2025-06-02 14:34:35.330091 | localhost | Output suppressed because no_log was given
2025-06-02 14:34:35.334048 |
2025-06-02 14:34:35.334262 | LOOP [upload-logs : Compress console log and json output]
2025-06-02 14:34:35.395260 | localhost | skipping: Conditional result was False
2025-06-02 14:34:35.401872 | localhost | skipping: Conditional result was False
2025-06-02 14:34:35.410106 |
2025-06-02 14:34:35.410351 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-02 14:34:35.464411 | localhost | skipping: Conditional result was False
2025-06-02 14:34:35.465219 |
2025-06-02 14:34:35.467535 | localhost | skipping: Conditional result was False
2025-06-02 14:34:35.477920 |
2025-06-02 14:34:35.478184 | LOOP [upload-logs : Upload console log and json output]
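The .d..t...... strings in the collect step above are rsync --itemize-changes codes (a directory whose modification time was updated). fetch-output pulls each node's zuul-output directories (logs, artifacts, docs) into the executor's work directory; roughly equivalent to the call below, though the role actually drives this through Ansible's synchronize module. The host alias and source layout are assumptions; the destination path is the build workspace seen above.

import subprocess

# Illustrative equivalent of the fetch-output collect step for the logs dir.
subprocess.run(
    [
        "rsync", "-avi",  # -i prints itemized codes such as ".d..t......"
        "orchestrator:zuul-output/logs/",
        "/var/lib/zuul/builds/1c910ada1ace424a8673b485a079b076/work/logs/",
    ],
    check=True,
)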