2025-03-10 23:34:40.903640 | Job console starting...
2025-03-10 23:34:40.916920 | Updating repositories
2025-03-10 23:34:41.013779 | Preparing job workspace
2025-03-10 23:34:42.490389 | Running Ansible setup...
2025-03-10 23:34:47.464075 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-10 23:34:48.156957 |
2025-03-10 23:34:48.157151 | PLAY [Base pre]
2025-03-10 23:34:48.188169 |
2025-03-10 23:34:48.188302 | TASK [Setup log path fact]
2025-03-10 23:34:48.221997 | orchestrator | ok
2025-03-10 23:34:48.244580 |
2025-03-10 23:34:48.244703 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-10 23:34:48.289861 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:48.305959 |
2025-03-10 23:34:48.306149 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-10 23:34:48.367883 | orchestrator | ok
2025-03-10 23:34:48.380151 |
2025-03-10 23:34:48.380288 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-10 23:34:48.436834 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:48.452852 |
2025-03-10 23:34:48.452986 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-10 23:34:48.488127 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:48.503582 |
2025-03-10 23:34:48.503722 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-10 23:34:48.539312 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:48.554829 |
2025-03-10 23:34:48.554974 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-10 23:34:48.590467 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:48.616366 |
2025-03-10 23:34:48.616578 | TASK [emit-job-header : Print job information]
2025-03-10 23:34:48.693730 | # Job Information
2025-03-10 23:34:48.693934 | Ansible Version: 2.15.3
2025-03-10 23:34:48.693978 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-03-10 23:34:48.694018 | Pipeline: post
2025-03-10 23:34:48.694087 | Executor: 7d211f194f6a
2025-03-10 23:34:48.694119 | Triggered by: https://github.com/osism/testbed/commit/af7b5875124ec115185ac1bea08af6619a635d52
2025-03-10 23:34:48.694146 | Event ID: 1dbe7dde-fdec-11ef-8229-d4ae7b0c4880
2025-03-10 23:34:48.704364 |
2025-03-10 23:34:48.704518 | LOOP [emit-job-header : Print node information]
2025-03-10 23:34:48.873951 | orchestrator | ok:
2025-03-10 23:34:48.874217 | orchestrator | # Node Information
2025-03-10 23:34:48.874270 | orchestrator | Inventory Hostname: orchestrator
2025-03-10 23:34:48.874308 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-10 23:34:48.874343 | orchestrator | Username: zuul-testbed05
2025-03-10 23:34:48.874377 | orchestrator | Distro: Debian 12.9
2025-03-10 23:34:48.874409 | orchestrator | Provider: static-testbed
2025-03-10 23:34:48.874441 | orchestrator | Label: testbed-orchestrator
2025-03-10 23:34:48.874472 | orchestrator | Product Name: OpenStack Nova
2025-03-10 23:34:48.874504 | orchestrator | Interface IP: 81.163.193.140
2025-03-10 23:34:48.900930 |
2025-03-10 23:34:48.901104 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-10 23:34:49.392151 | orchestrator -> localhost | changed
2025-03-10 23:34:49.411450 |
2025-03-10 23:34:49.411615 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-10 23:34:50.491698 | orchestrator -> localhost | changed
2025-03-10 23:34:50.514089 |
2025-03-10 23:34:50.514213 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-10 23:34:50.811862 | orchestrator -> localhost | ok
2025-03-10 23:34:50.829503 |
2025-03-10 23:34:50.829667 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-10 23:34:50.878464 | orchestrator | ok
2025-03-10 23:34:50.898312 | orchestrator | included: /var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-10 23:34:50.907382 |
2025-03-10 23:34:50.907481 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-10 23:34:51.549656 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-10 23:34:51.550132 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/a8d42800f1c548199d1fbe7f4c48adb3_id_rsa
2025-03-10 23:34:51.550236 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/a8d42800f1c548199d1fbe7f4c48adb3_id_rsa.pub
2025-03-10 23:34:51.550339 | orchestrator -> localhost | The key fingerprint is:
2025-03-10 23:34:51.550446 | orchestrator -> localhost | SHA256:L9A2iZxdGq0PF+4TcqZ3wPSLzNL53qhigPuQZRB3eWw zuul-build-sshkey
2025-03-10 23:34:51.550555 | orchestrator -> localhost | The key's randomart image is:
2025-03-10 23:34:51.550671 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-10 23:34:51.550778 | orchestrator -> localhost | | . . .o |
2025-03-10 23:34:51.550872 | orchestrator -> localhost | | o ...E |
2025-03-10 23:34:51.550934 | orchestrator -> localhost | | . .o= |
2025-03-10 23:34:51.550991 | orchestrator -> localhost | | o = X o |
2025-03-10 23:34:51.551083 | orchestrator -> localhost | | .B S O . |
2025-03-10 23:34:51.551144 | orchestrator -> localhost | | .+.o / = . |
2025-03-10 23:34:51.551200 | orchestrator -> localhost | | o. .+ % o |
2025-03-10 23:34:51.551283 | orchestrator -> localhost | | .. o+ + o |
2025-03-10 23:34:51.551387 | orchestrator -> localhost | | ... ..o+ . |
2025-03-10 23:34:51.551486 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-10 23:34:51.551699 | orchestrator -> localhost | ok: Runtime: 0:00:00.130213
2025-03-10 23:34:51.570848 |
2025-03-10 23:34:51.570996 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-10 23:34:51.618471 | orchestrator | ok
2025-03-10 23:34:51.631530 | orchestrator | included: /var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-10 23:34:51.642641 |
2025-03-10 23:34:51.642744 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-10 23:34:51.678380 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:51.689717 |
2025-03-10 23:34:51.689825 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-10 23:34:52.241634 | orchestrator | changed
2025-03-10 23:34:52.252210 |
2025-03-10 23:34:52.252330 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-10 23:34:52.509021 | orchestrator | ok
2025-03-10 23:34:52.519793 |
2025-03-10 23:34:52.519920 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-10 23:34:52.939484 | orchestrator | ok
2025-03-10 23:34:52.947077 |
2025-03-10 23:34:52.947186 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-10 23:34:53.333822 | orchestrator | ok
2025-03-10 23:34:53.344156 |
2025-03-10 23:34:53.344279 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-10 23:34:53.378993 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:53.396234 |
2025-03-10 23:34:53.396382 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-10 23:34:53.820785 | orchestrator -> localhost | changed
2025-03-10 23:34:53.847017 |
2025-03-10 23:34:53.847195 | TASK [add-build-sshkey : Add back temp key]
2025-03-10 23:34:54.209855 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/a8d42800f1c548199d1fbe7f4c48adb3_id_rsa (zuul-build-sshkey)
2025-03-10 23:34:54.210265 | orchestrator -> localhost | ok: Runtime: 0:00:00.017214
2025-03-10 23:34:54.224725 |
2025-03-10 23:34:54.224865 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-10 23:34:54.615363 | orchestrator | ok
2025-03-10 23:34:54.625579 |
2025-03-10 23:34:54.625733 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-10 23:34:54.660650 | orchestrator | skipping: Conditional result was False
2025-03-10 23:34:54.675844 |
2025-03-10 23:34:54.675947 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-10 23:34:55.086091 | orchestrator | ok
2025-03-10 23:34:55.105262 |
2025-03-10 23:34:55.105385 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-10 23:34:55.153031 | orchestrator | ok
2025-03-10 23:34:55.162580 |
2025-03-10 23:34:55.162695 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-10 23:34:55.465773 | orchestrator -> localhost | ok
2025-03-10 23:34:55.475363 |
2025-03-10 23:34:55.475486 | TASK [validate-host : Collect information about the host]
2025-03-10 23:34:56.702387 | orchestrator | ok
2025-03-10 23:34:56.720032 |
2025-03-10 23:34:56.720206 | TASK [validate-host : Sanitize hostname]
2025-03-10 23:34:56.801443 | orchestrator | ok
2025-03-10 23:34:56.810759 |
2025-03-10 23:34:56.810877 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-10 23:34:57.375909 | orchestrator -> localhost | changed
2025-03-10 23:34:57.392221 |
2025-03-10 23:34:57.392376 | TASK [validate-host : Collect information about zuul worker]
2025-03-10 23:34:57.901074 | orchestrator | ok
2025-03-10 23:34:57.911816 |
2025-03-10 23:34:57.911951 | TASK [validate-host : Write out all zuul information for each host]
2025-03-10 23:34:58.463033 | orchestrator -> localhost | changed
2025-03-10 23:34:58.488490 |
2025-03-10 23:34:58.488627 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-10 23:34:58.776765 | orchestrator | ok
2025-03-10 23:34:58.787322 |
2025-03-10 23:34:58.787451 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-10 23:35:16.778540 | orchestrator | changed:
2025-03-10 23:35:16.778680 | orchestrator | .d..t...... src/
2025-03-10 23:35:16.778712 | orchestrator | .d..t...... src/github.com/
2025-03-10 23:35:16.778735 | orchestrator | .d..t...... src/github.com/osism/
2025-03-10 23:35:16.778756 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-10 23:35:16.778776 | orchestrator | RedHat.yml
2025-03-10 23:35:16.793189 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-10 23:35:16.793206 | orchestrator | RedHat.yml
2025-03-10 23:35:16.793258 | orchestrator | = 1.53.0"...
2025-03-10 23:35:28.820071 | orchestrator | 23:35:28.819 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-03-10 23:35:29.797138 | orchestrator | 23:35:29.797 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-10 23:35:31.530502 | orchestrator | 23:35:31.530 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-10 23:35:32.830182 | orchestrator | 23:35:32.830 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-10 23:35:33.941843 | orchestrator | 23:35:33.941 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-10 23:35:35.077580 | orchestrator | 23:35:35.077 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-03-10 23:35:36.137107 | orchestrator | 23:35:36.136 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-10 23:35:36.137310 | orchestrator | 23:35:36.137 STDOUT terraform: Providers are signed by their developers.
2025-03-10 23:35:36.137547 | orchestrator | 23:35:36.137 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-10 23:35:36.137559 | orchestrator | 23:35:36.137 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-10 23:35:36.137567 | orchestrator | 23:35:36.137 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-10 23:35:36.137969 | orchestrator | 23:35:36.137 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-10 23:35:36.137984 | orchestrator | 23:35:36.137 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-10 23:35:36.137997 | orchestrator | 23:35:36.137 STDOUT terraform: you run "tofu init" in the future.
2025-03-10 23:35:36.138310 | orchestrator | 23:35:36.138 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-10 23:35:36.274163 | orchestrator | 23:35:36.138 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-10 23:35:36.274293 | orchestrator | 23:35:36.138 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-10 23:35:36.274318 | orchestrator | 23:35:36.138 STDOUT terraform: should now work.
2025-03-10 23:35:36.274331 | orchestrator | 23:35:36.138 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-10 23:35:36.274344 | orchestrator | 23:35:36.138 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-10 23:35:36.274344 | orchestrator | 23:35:36.138 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-10 23:35:36.274414 | orchestrator | 23:35:36.273 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-03-10 23:35:36.423712 | orchestrator | 23:35:36.423 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-10 23:35:36.423804 | orchestrator | 23:35:36.423 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-10 23:35:36.423922 | orchestrator | 23:35:36.423 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-10 23:35:36.423960 | orchestrator | 23:35:36.423 STDOUT terraform: for this configuration.
2025-03-10 23:35:36.572763 | orchestrator | 23:35:36.572 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-03-10 23:35:36.657895 | orchestrator | 23:35:36.657 STDOUT terraform: ci.auto.tfvars
2025-03-10 23:35:36.802800 | orchestrator | 23:35:36.802 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-03-10 23:35:37.673008 | orchestrator | 23:35:37.672 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-03-10 23:35:38.179288 | orchestrator | 23:35:38.178 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-03-10 23:35:38.381846 | orchestrator | 23:35:38.381 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-03-10 23:35:38.381942 | orchestrator | 23:35:38.381 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-03-10 23:35:38.381982 | orchestrator | 23:35:38.381 STDOUT terraform:  + create
2025-03-10 23:35:38.382079 | orchestrator | 23:35:38.381 STDOUT terraform:  <= read (data resources)
2025-03-10 23:35:38.382155 | orchestrator | 23:35:38.382 STDOUT terraform: OpenTofu will perform the following actions:
2025-03-10 23:35:38.382307 | orchestrator | 23:35:38.382 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-03-10 23:35:38.382526 | orchestrator | 23:35:38.382 STDOUT terraform:  # (config refers to values not yet known)
2025-03-10 23:35:38.382614 | orchestrator | 23:35:38.382 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-03-10 23:35:38.382694 | orchestrator | 23:35:38.382 STDOUT terraform:  + checksum = (known after apply)
2025-03-10 23:35:38.382771 | orchestrator | 23:35:38.382 STDOUT terraform:  + created_at = (known after apply)
2025-03-10 23:35:38.382850 | orchestrator | 23:35:38.382 STDOUT terraform:  + file = (known after apply)
2025-03-10 23:35:38.382929 | orchestrator | 23:35:38.382 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.383010 | orchestrator | 23:35:38.382 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.383090 | orchestrator | 23:35:38.383 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-03-10 23:35:38.383167 | orchestrator | 23:35:38.383 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-03-10 23:35:38.383222 | orchestrator | 23:35:38.383 STDOUT terraform:  + most_recent = true
2025-03-10 23:35:38.383299 | orchestrator | 23:35:38.383 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:35:38.383384 | orchestrator | 23:35:38.383 STDOUT terraform:  + protected = (known after apply)
2025-03-10 23:35:38.383477 | orchestrator | 23:35:38.383 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.383555 | orchestrator | 23:35:38.383 STDOUT terraform:  + schema = (known after apply)
2025-03-10 23:35:38.383634 | orchestrator | 23:35:38.383 STDOUT terraform:  + size_bytes = (known after apply)
2025-03-10 23:35:38.383714 | orchestrator | 23:35:38.383 STDOUT terraform:  + tags = (known after apply)
2025-03-10 23:35:38.383798 | orchestrator | 23:35:38.383 STDOUT terraform:  + updated_at = (known after apply)
2025-03-10 23:35:38.383830 | orchestrator | 23:35:38.383 STDOUT terraform:  }
2025-03-10 23:35:38.383967 | orchestrator | 23:35:38.383 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-03-10 23:35:38.384043 | orchestrator | 23:35:38.383 STDOUT terraform:  # (config refers to values not yet known)
2025-03-10 23:35:38.384141 | orchestrator | 23:35:38.384 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-03-10 23:35:38.384220 | orchestrator | 23:35:38.384 STDOUT terraform:  + checksum = (known after apply)
2025-03-10 23:35:38.384298 | orchestrator | 23:35:38.384 STDOUT terraform:  + created_at = (known after apply)
2025-03-10 23:35:38.384384 | orchestrator | 23:35:38.384 STDOUT terraform:  + file = (known after apply)
2025-03-10 23:35:38.384464 | orchestrator | 23:35:38.384 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.384541 | orchestrator | 23:35:38.384 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.384618 | orchestrator | 23:35:38.384 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-03-10 23:35:38.384696 | orchestrator | 23:35:38.384 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-03-10 23:35:38.384752 | orchestrator | 23:35:38.384 STDOUT terraform:  + most_recent = true
2025-03-10 23:35:38.384831 | orchestrator | 23:35:38.384 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:35:38.384907 | orchestrator | 23:35:38.384 STDOUT terraform:  + protected = (known after apply)
2025-03-10 23:35:38.384984 | orchestrator | 23:35:38.384 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.385086 | orchestrator | 23:35:38.384 STDOUT terraform:  + schema = (known after apply)
2025-03-10 23:35:38.385170 | orchestrator | 23:35:38.385 STDOUT terraform:  + size_bytes = (known after apply)
2025-03-10 23:35:38.385248 | orchestrator | 23:35:38.385 STDOUT terraform:  + tags = (known after apply)
2025-03-10 23:35:38.385329 | orchestrator | 23:35:38.385 STDOUT terraform:  + updated_at = (known after apply)
2025-03-10 23:35:38.385366 | orchestrator | 23:35:38.385 STDOUT terraform:  }
2025-03-10 23:35:38.385462 | orchestrator | 23:35:38.385 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-03-10 23:35:38.385538 | orchestrator | 23:35:38.385 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-03-10 23:35:38.385638 | orchestrator | 23:35:38.385 STDOUT terraform:  + content = (known after apply)
2025-03-10 23:35:38.385733 | orchestrator | 23:35:38.385 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-03-10 23:35:38.385831 | orchestrator | 23:35:38.385 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-03-10 23:35:38.385930 | orchestrator | 23:35:38.385 STDOUT terraform:  + content_md5 = (known after apply)
2025-03-10 23:35:38.386054 | orchestrator | 23:35:38.385 STDOUT terraform:  + content_sha1 = (known after apply)
2025-03-10 23:35:38.386147 | orchestrator | 23:35:38.386 STDOUT terraform:  + content_sha256 = (known after apply)
2025-03-10 23:35:38.386243 | orchestrator | 23:35:38.386 STDOUT terraform:  + content_sha512 = (known after apply)
2025-03-10 23:35:38.386308 | orchestrator | 23:35:38.386 STDOUT terraform:  + directory_permission = "0777"
2025-03-10 23:35:38.386371 | orchestrator | 23:35:38.386 STDOUT terraform:  + file_permission = "0644"
2025-03-10 23:35:38.386513 | orchestrator | 23:35:38.386 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-03-10 23:35:38.386614 | orchestrator | 23:35:38.386 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.386650 | orchestrator | 23:35:38.386 STDOUT terraform:  }
2025-03-10 23:35:38.386725 | orchestrator | 23:35:38.386 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-03-10 23:35:38.386785 | orchestrator | 23:35:38.386 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-03-10 23:35:38.386870 | orchestrator | 23:35:38.386 STDOUT terraform:  + content = (known after apply)
2025-03-10 23:35:38.386955 | orchestrator | 23:35:38.386 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-03-10 23:35:38.387037 | orchestrator | 23:35:38.386 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-03-10 23:35:38.387122 | orchestrator | 23:35:38.387 STDOUT terraform:  + content_md5 = (known after apply)
2025-03-10 23:35:38.387209 | orchestrator | 23:35:38.387 STDOUT terraform:  + content_sha1 = (known after apply)
2025-03-10 23:35:38.387295 | orchestrator | 23:35:38.387 STDOUT terraform:  + content_sha256 = (known after apply)
2025-03-10 23:35:38.387401 | orchestrator | 23:35:38.387 STDOUT terraform:  + content_sha512 = (known after apply)
2025-03-10 23:35:38.387473 | orchestrator | 23:35:38.387 STDOUT terraform:  + directory_permission = "0777"
2025-03-10 23:35:38.387530 | orchestrator | 23:35:38.387 STDOUT terraform:  + file_permission = "0644"
2025-03-10 23:35:38.387605 | orchestrator | 23:35:38.387 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-03-10 23:35:38.387696 | orchestrator | 23:35:38.387 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.387726 | orchestrator | 23:35:38.387 STDOUT terraform:  }
2025-03-10 23:35:38.387787 | orchestrator | 23:35:38.387 STDOUT terraform:  # local_file.inventory will be created
2025-03-10 23:35:38.387845 | orchestrator | 23:35:38.387 STDOUT terraform:  + resource "local_file" "inventory" {
2025-03-10 23:35:38.387933 | orchestrator | 23:35:38.387 STDOUT terraform:  + content = (known after apply)
2025-03-10 23:35:38.388005 | orchestrator | 23:35:38.387 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-03-10 23:35:38.388073 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-03-10 23:35:38.388146 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_md5 = (known after apply)
2025-03-10 23:35:38.388215 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_sha1 = (known after apply)
2025-03-10 23:35:38.388286 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_sha256 = (known after apply)
2025-03-10 23:35:38.388355 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_sha512 = (known after apply)
2025-03-10 23:35:38.388414 | orchestrator | 23:35:38.388 STDOUT terraform:  + directory_permission = "0777"
2025-03-10 23:35:38.388461 | orchestrator | 23:35:38.388 STDOUT terraform:  + file_permission = "0644"
2025-03-10 23:35:38.388522 | orchestrator | 23:35:38.388 STDOUT terraform:  + filename = "inventory.ci"
2025-03-10 23:35:38.388592 | orchestrator | 23:35:38.388 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.388617 | orchestrator | 23:35:38.388 STDOUT terraform:  }
2025-03-10 23:35:38.388739 | orchestrator | 23:35:38.388 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-03-10 23:35:38.388798 | orchestrator | 23:35:38.388 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-03-10 23:35:38.388862 | orchestrator | 23:35:38.388 STDOUT terraform:  + content = (sensitive value)
2025-03-10 23:35:38.388933 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-03-10 23:35:38.389006 | orchestrator | 23:35:38.388 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-03-10 23:35:38.389074 | orchestrator | 23:35:38.389 STDOUT terraform:  + content_md5 = (known after apply)
2025-03-10 23:35:38.389147 | orchestrator | 23:35:38.389 STDOUT terraform:  + content_sha1 = (known after apply)
2025-03-10 23:35:38.389212 | orchestrator | 23:35:38.389 STDOUT terraform:  + content_sha256 = (known after apply)
2025-03-10 23:35:38.389281 | orchestrator | 23:35:38.389 STDOUT terraform:  + content_sha512 = (known after apply)
2025-03-10 23:35:38.389326 | orchestrator | 23:35:38.389 STDOUT terraform:  + directory_permission = "0700"
2025-03-10 23:35:38.389373 | orchestrator | 23:35:38.389 STDOUT terraform:  + file_permission = "0600"
2025-03-10 23:35:38.389441 | orchestrator | 23:35:38.389 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-03-10 23:35:38.389513 | orchestrator | 23:35:38.389 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.389540 | orchestrator | 23:35:38.389 STDOUT terraform:  }
2025-03-10 23:35:38.389598 | orchestrator | 23:35:38.389 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-03-10 23:35:38.389656 | orchestrator | 23:35:38.389 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-03-10 23:35:38.389696 | orchestrator | 23:35:38.389 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.389722 | orchestrator | 23:35:38.389 STDOUT terraform:  }
2025-03-10 23:35:38.389821 | orchestrator | 23:35:38.389 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-03-10 23:35:38.389916 | orchestrator | 23:35:38.389 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-03-10 23:35:38.389978 | orchestrator | 23:35:38.389 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.390038 | orchestrator | 23:35:38.389 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.390098 | orchestrator | 23:35:38.390 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.390160 | orchestrator | 23:35:38.390 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.390220 | orchestrator | 23:35:38.390 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.390299 | orchestrator | 23:35:38.390 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-03-10 23:35:38.390360 | orchestrator | 23:35:38.390 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.390429 | orchestrator | 23:35:38.390 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.390469 | orchestrator | 23:35:38.390 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.390495 | orchestrator | 23:35:38.390 STDOUT terraform:  }
2025-03-10 23:35:38.390588 | orchestrator | 23:35:38.390 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-03-10 23:35:38.390669 | orchestrator | 23:35:38.390 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.390720 | orchestrator | 23:35:38.390 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.390755 | orchestrator | 23:35:38.390 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.390808 | orchestrator | 23:35:38.390 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.390857 | orchestrator | 23:35:38.390 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.390910 | orchestrator | 23:35:38.390 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.390975 | orchestrator | 23:35:38.390 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-03-10 23:35:38.391026 | orchestrator | 23:35:38.390 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.391059 | orchestrator | 23:35:38.391 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.391094 | orchestrator | 23:35:38.391 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.391116 | orchestrator | 23:35:38.391 STDOUT terraform:  }
2025-03-10 23:35:38.391194 | orchestrator | 23:35:38.391 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-03-10 23:35:38.391272 | orchestrator | 23:35:38.391 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.391324 | orchestrator | 23:35:38.391 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.391359 | orchestrator | 23:35:38.391 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.391421 | orchestrator | 23:35:38.391 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.391473 | orchestrator | 23:35:38.391 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.391523 | orchestrator | 23:35:38.391 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.391589 | orchestrator | 23:35:38.391 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-03-10 23:35:38.391645 | orchestrator | 23:35:38.391 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.391677 | orchestrator | 23:35:38.391 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.391712 | orchestrator | 23:35:38.391 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.391733 | orchestrator | 23:35:38.391 STDOUT terraform:  }
2025-03-10 23:35:38.391816 | orchestrator | 23:35:38.391 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-03-10 23:35:38.391892 | orchestrator | 23:35:38.391 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.391942 | orchestrator | 23:35:38.391 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.391977 | orchestrator | 23:35:38.391 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.392060 | orchestrator | 23:35:38.391 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.392113 | orchestrator | 23:35:38.392 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.392164 | orchestrator | 23:35:38.392 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.392230 | orchestrator | 23:35:38.392 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-03-10 23:35:38.392281 | orchestrator | 23:35:38.392 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.392316 | orchestrator | 23:35:38.392 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.392357 | orchestrator | 23:35:38.392 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.392403 | orchestrator | 23:35:38.392 STDOUT terraform:  }
2025-03-10 23:35:38.392469 | orchestrator | 23:35:38.392 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-03-10 23:35:38.392548 | orchestrator | 23:35:38.392 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.392597 | orchestrator | 23:35:38.392 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.392632 | orchestrator | 23:35:38.392 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.392684 | orchestrator | 23:35:38.392 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.392735 | orchestrator | 23:35:38.392 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.392787 | orchestrator | 23:35:38.392 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.392852 | orchestrator | 23:35:38.392 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-03-10 23:35:38.392904 | orchestrator | 23:35:38.392 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.392940 | orchestrator | 23:35:38.392 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.392975 | orchestrator | 23:35:38.392 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.392997 | orchestrator | 23:35:38.392 STDOUT terraform:  }
2025-03-10 23:35:38.393076 | orchestrator | 23:35:38.392 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-03-10 23:35:38.393153 | orchestrator | 23:35:38.393 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.393203 | orchestrator | 23:35:38.393 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.393240 | orchestrator | 23:35:38.393 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.393291 | orchestrator | 23:35:38.393 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.393344 | orchestrator | 23:35:38.393 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.393419 | orchestrator | 23:35:38.393 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.393484 | orchestrator | 23:35:38.393 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-03-10 23:35:38.393537 | orchestrator | 23:35:38.393 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.393571 | orchestrator | 23:35:38.393 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.393606 | orchestrator | 23:35:38.393 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.393626 | orchestrator | 23:35:38.393 STDOUT terraform:  }
2025-03-10 23:35:38.393701 | orchestrator | 23:35:38.393 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-03-10 23:35:38.393774 | orchestrator | 23:35:38.393 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:35:38.393821 | orchestrator | 23:35:38.393 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.393852 | orchestrator | 23:35:38.393 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.393902 | orchestrator | 23:35:38.393 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.393951 | orchestrator | 23:35:38.393 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:35:38.394000 | orchestrator | 23:35:38.393 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.394075 | orchestrator | 23:35:38.393 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-03-10 23:35:38.394123 | orchestrator | 23:35:38.394 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.394156 | orchestrator | 23:35:38.394 STDOUT terraform:  + size = 80
2025-03-10 23:35:38.394187 | orchestrator | 23:35:38.394 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.394208 | orchestrator | 23:35:38.394 STDOUT terraform:  }
2025-03-10 23:35:38.394278 | orchestrator | 23:35:38.394 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-03-10 23:35:38.394353 | orchestrator | 23:35:38.394 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-10 23:35:38.394409 | orchestrator | 23:35:38.394 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.394441 | orchestrator | 23:35:38.394 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:35:38.394494 | orchestrator | 23:35:38.394 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:35:38.394539 | orchestrator | 23:35:38.394 STDOUT terraform:  + metadata = (known after apply)
2025-03-10 23:35:38.394598 | orchestrator | 23:35:38.394 STDOUT terraform:  + name = "testbed-volume-0-node-0"
2025-03-10 23:35:38.394646 | orchestrator | 23:35:38.394 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:35:38.394678 | orchestrator | 23:35:38.394 STDOUT terraform:  + size = 20
2025-03-10 23:35:38.394710 | orchestrator | 23:35:38.394 STDOUT terraform:  + volume_type = "ssd"
2025-03-10 23:35:38.394729 | orchestrator | 23:35:38.394 STDOUT terraform:  }
2025-03-10 23:35:38.394800 | orchestrator | 23:35:38.394 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-03-10 23:35:38.394869 | orchestrator | 23:35:38.394 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-10 23:35:38.394917 | orchestrator | 23:35:38.394 STDOUT terraform:  + attachment = (known after apply)
2025-03-10 23:35:38.394948 | orchestrator | 23:35:38.394 STDOUT terraform:
+ availability_zone = "nova" 2025-03-10 23:35:38.394998 | orchestrator | 23:35:38.394 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.395045 | orchestrator | 23:35:38.394 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.395103 | orchestrator | 23:35:38.395 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-10 23:35:38.395151 | orchestrator | 23:35:38.395 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.395182 | orchestrator | 23:35:38.395 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.395214 | orchestrator | 23:35:38.395 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.395235 | orchestrator | 23:35:38.395 STDOUT terraform:  } 2025-03-10 23:35:38.395306 | orchestrator | 23:35:38.395 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-10 23:35:38.395373 | orchestrator | 23:35:38.395 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.395431 | orchestrator | 23:35:38.395 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.395462 | orchestrator | 23:35:38.395 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.395512 | orchestrator | 23:35:38.395 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.395563 | orchestrator | 23:35:38.395 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.395622 | orchestrator | 23:35:38.395 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-10 23:35:38.395671 | orchestrator | 23:35:38.395 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.395702 | orchestrator | 23:35:38.395 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.395733 | orchestrator | 23:35:38.395 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.395753 | orchestrator | 23:35:38.395 STDOUT terraform:  } 2025-03-10 23:35:38.395824 | orchestrator | 23:35:38.395 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-10 23:35:38.395893 | orchestrator | 23:35:38.395 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.395941 | orchestrator | 23:35:38.395 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.395973 | orchestrator | 23:35:38.395 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.396024 | orchestrator | 23:35:38.395 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.396071 | orchestrator | 23:35:38.396 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.396130 | orchestrator | 23:35:38.396 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-10 23:35:38.396178 | orchestrator | 23:35:38.396 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.396209 | orchestrator | 23:35:38.396 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.396241 | orchestrator | 23:35:38.396 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.396261 | orchestrator | 23:35:38.396 STDOUT terraform:  } 2025-03-10 23:35:38.396332 | orchestrator | 23:35:38.396 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-10 23:35:38.396409 | orchestrator | 23:35:38.396 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.396456 | orchestrator | 23:35:38.396 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.396489 | orchestrator | 23:35:38.396 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.396536 | orchestrator | 23:35:38.396 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.396585 | orchestrator | 23:35:38.396 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.396643 | orchestrator | 23:35:38.396 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-10 23:35:38.396691 | orchestrator | 23:35:38.396 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:35:38.396724 | orchestrator | 23:35:38.396 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.396758 | orchestrator | 23:35:38.396 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.396778 | orchestrator | 23:35:38.396 STDOUT terraform:  } 2025-03-10 23:35:38.396851 | orchestrator | 23:35:38.396 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-10 23:35:38.396921 | orchestrator | 23:35:38.396 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.396970 | orchestrator | 23:35:38.396 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.397003 | orchestrator | 23:35:38.396 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.397050 | orchestrator | 23:35:38.396 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.397101 | orchestrator | 23:35:38.397 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.397156 | orchestrator | 23:35:38.397 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-10 23:35:38.397204 | orchestrator | 23:35:38.397 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.397236 | orchestrator | 23:35:38.397 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.397269 | orchestrator | 23:35:38.397 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.397290 | orchestrator | 23:35:38.397 STDOUT terraform:  } 2025-03-10 23:35:38.397360 | orchestrator | 23:35:38.397 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-10 23:35:38.397451 | orchestrator | 23:35:38.397 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.397501 | orchestrator | 23:35:38.397 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.397534 | orchestrator | 23:35:38.397 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:35:38.397583 | orchestrator | 23:35:38.397 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.397631 | orchestrator | 23:35:38.397 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.397691 | orchestrator | 23:35:38.397 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-10 23:35:38.397739 | orchestrator | 23:35:38.397 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.397771 | orchestrator | 23:35:38.397 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.397805 | orchestrator | 23:35:38.397 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.397823 | orchestrator | 23:35:38.397 STDOUT terraform:  } 2025-03-10 23:35:38.397893 | orchestrator | 23:35:38.397 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-10 23:35:38.397959 | orchestrator | 23:35:38.397 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.398001 | orchestrator | 23:35:38.397 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.398043 | orchestrator | 23:35:38.397 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.398086 | orchestrator | 23:35:38.398 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.398130 | orchestrator | 23:35:38.398 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.398182 | orchestrator | 23:35:38.398 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-10 23:35:38.398225 | orchestrator | 23:35:38.398 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.398254 | orchestrator | 23:35:38.398 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.398282 | orchestrator | 23:35:38.398 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.398300 | orchestrator | 23:35:38.398 STDOUT terraform:  } 2025-03-10 23:35:38.398362 | orchestrator | 23:35:38.398 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-10 23:35:38.398432 | orchestrator | 23:35:38.398 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.398472 | orchestrator | 23:35:38.398 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.398501 | orchestrator | 23:35:38.398 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.398543 | orchestrator | 23:35:38.398 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.398584 | orchestrator | 23:35:38.398 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.398636 | orchestrator | 23:35:38.398 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-10 23:35:38.398678 | orchestrator | 23:35:38.398 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.398705 | orchestrator | 23:35:38.398 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.398734 | orchestrator | 23:35:38.398 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.398754 | orchestrator | 23:35:38.398 STDOUT terraform:  } 2025-03-10 23:35:38.398817 | orchestrator | 23:35:38.398 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-10 23:35:38.398875 | orchestrator | 23:35:38.398 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.398917 | orchestrator | 23:35:38.398 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.398946 | orchestrator | 23:35:38.398 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.398988 | orchestrator | 23:35:38.398 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.399031 | orchestrator | 23:35:38.398 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.399082 | orchestrator | 23:35:38.399 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-10 23:35:38.399123 | orchestrator | 23:35:38.399 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:35:38.399150 | orchestrator | 23:35:38.399 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.399180 | orchestrator | 23:35:38.399 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.399198 | orchestrator | 23:35:38.399 STDOUT terraform:  } 2025-03-10 23:35:38.399261 | orchestrator | 23:35:38.399 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-10 23:35:38.399320 | orchestrator | 23:35:38.399 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.399362 | orchestrator | 23:35:38.399 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.399399 | orchestrator | 23:35:38.399 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.399443 | orchestrator | 23:35:38.399 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.399488 | orchestrator | 23:35:38.399 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.399538 | orchestrator | 23:35:38.399 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-10 23:35:38.399580 | orchestrator | 23:35:38.399 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.399607 | orchestrator | 23:35:38.399 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.399636 | orchestrator | 23:35:38.399 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.399655 | orchestrator | 23:35:38.399 STDOUT terraform:  } 2025-03-10 23:35:38.399718 | orchestrator | 23:35:38.399 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-10 23:35:38.399777 | orchestrator | 23:35:38.399 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.399818 | orchestrator | 23:35:38.399 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.399846 | orchestrator | 23:35:38.399 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:35:38.399890 | orchestrator | 23:35:38.399 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.399932 | orchestrator | 23:35:38.399 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.399984 | orchestrator | 23:35:38.399 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-10 23:35:38.400029 | orchestrator | 23:35:38.399 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.400058 | orchestrator | 23:35:38.400 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.400086 | orchestrator | 23:35:38.400 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.400104 | orchestrator | 23:35:38.400 STDOUT terraform:  } 2025-03-10 23:35:38.400166 | orchestrator | 23:35:38.400 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-10 23:35:38.400225 | orchestrator | 23:35:38.400 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.400267 | orchestrator | 23:35:38.400 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.400300 | orchestrator | 23:35:38.400 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.400343 | orchestrator | 23:35:38.400 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.400393 | orchestrator | 23:35:38.400 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.400443 | orchestrator | 23:35:38.400 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-10 23:35:38.400484 | orchestrator | 23:35:38.400 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.400512 | orchestrator | 23:35:38.400 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.400544 | orchestrator | 23:35:38.400 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.400560 | orchestrator | 23:35:38.400 STDOUT terraform:  } 2025-03-10 23:35:38.400623 | orchestrator | 23:35:38.400 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-10 23:35:38.400682 | orchestrator | 23:35:38.400 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.400730 | orchestrator | 23:35:38.400 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.400757 | orchestrator | 23:35:38.400 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.400800 | orchestrator | 23:35:38.400 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.400842 | orchestrator | 23:35:38.400 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.400894 | orchestrator | 23:35:38.400 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-10 23:35:38.400936 | orchestrator | 23:35:38.400 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.400963 | orchestrator | 23:35:38.400 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.400993 | orchestrator | 23:35:38.400 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.401010 | orchestrator | 23:35:38.400 STDOUT terraform:  } 2025-03-10 23:35:38.401073 | orchestrator | 23:35:38.401 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-10 23:35:38.401133 | orchestrator | 23:35:38.401 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.401175 | orchestrator | 23:35:38.401 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.401203 | orchestrator | 23:35:38.401 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.401247 | orchestrator | 23:35:38.401 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.401288 | orchestrator | 23:35:38.401 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.401341 | orchestrator | 23:35:38.401 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-10 23:35:38.401399 | orchestrator | 23:35:38.401 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:35:38.401430 | orchestrator | 23:35:38.401 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.401458 | orchestrator | 23:35:38.401 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.401476 | orchestrator | 23:35:38.401 STDOUT terraform:  } 2025-03-10 23:35:38.401539 | orchestrator | 23:35:38.401 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-10 23:35:38.401598 | orchestrator | 23:35:38.401 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.401641 | orchestrator | 23:35:38.401 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.401669 | orchestrator | 23:35:38.401 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.401713 | orchestrator | 23:35:38.401 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.401758 | orchestrator | 23:35:38.401 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.401808 | orchestrator | 23:35:38.401 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-10 23:35:38.401845 | orchestrator | 23:35:38.401 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.401869 | orchestrator | 23:35:38.401 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.401896 | orchestrator | 23:35:38.401 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.401912 | orchestrator | 23:35:38.401 STDOUT terraform:  } 2025-03-10 23:35:38.401968 | orchestrator | 23:35:38.401 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-10 23:35:38.402040 | orchestrator | 23:35:38.401 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.402069 | orchestrator | 23:35:38.402 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.402092 | orchestrator | 23:35:38.402 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:35:38.402132 | orchestrator | 23:35:38.402 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.402168 | orchestrator | 23:35:38.402 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.402215 | orchestrator | 23:35:38.402 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-10 23:35:38.402253 | orchestrator | 23:35:38.402 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.402278 | orchestrator | 23:35:38.402 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.402302 | orchestrator | 23:35:38.402 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.402318 | orchestrator | 23:35:38.402 STDOUT terraform:  } 2025-03-10 23:35:38.402402 | orchestrator | 23:35:38.402 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-10 23:35:38.402435 | orchestrator | 23:35:38.402 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:35:38.402472 | orchestrator | 23:35:38.402 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:35:38.402498 | orchestrator | 23:35:38.402 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.402536 | orchestrator | 23:35:38.402 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.402574 | orchestrator | 23:35:38.402 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:35:38.402621 | orchestrator | 23:35:38.402 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-10 23:35:38.402660 | orchestrator | 23:35:38.402 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.402684 | orchestrator | 23:35:38.402 STDOUT terraform:  + size = 20 2025-03-10 23:35:38.402709 | orchestrator | 23:35:38.402 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:35:38.402728 | orchestrator | 23:35:38.402 STDOUT terraform:  } 2025-03-10 23:35:38.402782 | orchestrator | 23:35:38.402 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-03-10 23:35:38.402835 | orchestrator | 23:35:38.402 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-03-10 23:35:38.402876 | orchestrator | 23:35:38.402 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-10 23:35:38.402920 | orchestrator | 23:35:38.402 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-10 23:35:38.402963 | orchestrator | 23:35:38.402 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-10 23:35:38.403007 | orchestrator | 23:35:38.402 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.403036 | orchestrator | 23:35:38.403 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.403062 | orchestrator | 23:35:38.403 STDOUT terraform:  + config_drive = true 2025-03-10 23:35:38.403106 | orchestrator | 23:35:38.403 STDOUT terraform:  + created = (known after apply) 2025-03-10 23:35:38.403149 | orchestrator | 23:35:38.403 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-10 23:35:38.403185 | orchestrator | 23:35:38.403 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-03-10 23:35:38.403214 | orchestrator | 23:35:38.403 STDOUT terraform:  + force_delete = false 2025-03-10 23:35:38.403258 | orchestrator | 23:35:38.403 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.403302 | orchestrator | 23:35:38.403 STDOUT terraform:  + image_id = (known after apply) 2025-03-10 23:35:38.403345 | orchestrator | 23:35:38.403 STDOUT terraform:  + image_name = (known after apply) 2025-03-10 23:35:38.403385 | orchestrator | 23:35:38.403 STDOUT terraform:  + key_pair = "testbed" 2025-03-10 23:35:38.403432 | orchestrator | 23:35:38.403 STDOUT terraform:  + name = "testbed-manager" 2025-03-10 23:35:38.403463 | orchestrator | 23:35:38.403 STDOUT terraform:  + power_state = "active" 2025-03-10 23:35:38.403506 | orchestrator | 23:35:38.403 STDOUT terraform:  + region = (known after 
apply) 2025-03-10 23:35:38.403549 | orchestrator | 23:35:38.403 STDOUT terraform:  + security_groups = (known after apply) 2025-03-10 23:35:38.403580 | orchestrator | 23:35:38.403 STDOUT terraform:  + stop_before_destroy = false 2025-03-10 23:35:38.403620 | orchestrator | 23:35:38.403 STDOUT terraform:  + updated = (known after apply) 2025-03-10 23:35:38.403667 | orchestrator | 23:35:38.403 STDOUT terraform:  + user_data = (known after apply) 2025-03-10 23:35:38.403687 | orchestrator | 23:35:38.403 STDOUT terraform:  + block_device { 2025-03-10 23:35:38.403717 | orchestrator | 23:35:38.403 STDOUT terraform:  + boot_index = 0 2025-03-10 23:35:38.403751 | orchestrator | 23:35:38.403 STDOUT terraform:  + delete_on_termination = false 2025-03-10 23:35:38.403786 | orchestrator | 23:35:38.403 STDOUT terraform:  + destination_type = "volume" 2025-03-10 23:35:38.403823 | orchestrator | 23:35:38.403 STDOUT terraform:  + multiattach = false 2025-03-10 23:35:38.403860 | orchestrator | 23:35:38.403 STDOUT terraform:  + source_type = "volume" 2025-03-10 23:35:38.403907 | orchestrator | 23:35:38.403 STDOUT terraform:  + uuid = (known after apply) 2025-03-10 23:35:38.403928 | orchestrator | 23:35:38.403 STDOUT terraform:  } 2025-03-10 23:35:38.403936 | orchestrator | 23:35:38.403 STDOUT terraform:  + network { 2025-03-10 23:35:38.403962 | orchestrator | 23:35:38.403 STDOUT terraform:  + access_network = false 2025-03-10 23:35:38.403999 | orchestrator | 23:35:38.403 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-03-10 23:35:38.404035 | orchestrator | 23:35:38.403 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-03-10 23:35:38.404071 | orchestrator | 23:35:38.404 STDOUT terraform:  + mac = (known after apply) 2025-03-10 23:35:38.404108 | orchestrator | 23:35:38.404 STDOUT terraform:  + name = (known after apply) 2025-03-10 23:35:38.404144 | orchestrator | 23:35:38.404 STDOUT terraform:  + port = (known after apply) 2025-03-10 23:35:38.404181 | orchestrator | 
23:35:38.404 STDOUT terraform:  + uuid = (known after apply) 2025-03-10 23:35:38.404197 | orchestrator | 23:35:38.404 STDOUT terraform:  } 2025-03-10 23:35:38.404204 | orchestrator | 23:35:38.404 STDOUT terraform:  } 2025-03-10 23:35:38.404260 | orchestrator | 23:35:38.404 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-03-10 23:35:38.404313 | orchestrator | 23:35:38.404 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-03-10 23:35:38.404353 | orchestrator | 23:35:38.404 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-10 23:35:38.404400 | orchestrator | 23:35:38.404 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-10 23:35:38.404440 | orchestrator | 23:35:38.404 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-10 23:35:38.404481 | orchestrator | 23:35:38.404 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.404508 | orchestrator | 23:35:38.404 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:35:38.404532 | orchestrator | 23:35:38.404 STDOUT terraform:  + config_drive = true 2025-03-10 23:35:38.404574 | orchestrator | 23:35:38.404 STDOUT terraform:  + created = (known after apply) 2025-03-10 23:35:38.404615 | orchestrator | 23:35:38.404 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-10 23:35:38.404649 | orchestrator | 23:35:38.404 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-03-10 23:35:38.404676 | orchestrator | 23:35:38.404 STDOUT terraform:  + force_delete = false 2025-03-10 23:35:38.404719 | orchestrator | 23:35:38.404 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.404760 | orchestrator | 23:35:38.404 STDOUT terraform:  + image_id = (known after apply) 2025-03-10 23:35:38.404802 | orchestrator | 23:35:38.404 STDOUT terraform:  + image_name = (known after apply) 2025-03-10 23:35:38.404832 | orchestrator | 23:35:38.404 STDOUT terraform:  + key_pair = "testbed" 2025-03-10 
23:35:38.404868 | orchestrator | 23:35:38.404 STDOUT terraform:
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
2025-03-10 23:35:38.415726 |
orchestrator | 23:35:38.415 STDOUT terraform:  } 2025-03-10 23:35:38.415783 | orchestrator | 23:35:38.415 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-03-10 23:35:38.415830 | orchestrator | 23:35:38.415 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-10 23:35:38.415858 | orchestrator | 23:35:38.415 STDOUT terraform:  + device = (known after apply) 2025-03-10 23:35:38.415888 | orchestrator | 23:35:38.415 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.415915 | orchestrator | 23:35:38.415 STDOUT terraform:  + instance_id = (known after apply) 2025-03-10 23:35:38.415944 | orchestrator | 23:35:38.415 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.415971 | orchestrator | 23:35:38.415 STDOUT terraform:  + volume_id = (known after apply) 2025-03-10 23:35:38.416027 | orchestrator | 23:35:38.415 STDOUT terraform:  } 2025-03-10 23:35:38.416034 | orchestrator | 23:35:38.415 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-03-10 23:35:38.416075 | orchestrator | 23:35:38.416 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-10 23:35:38.416104 | orchestrator | 23:35:38.416 STDOUT terraform:  + device = (known after apply) 2025-03-10 23:35:38.416621 | orchestrator | 23:35:38.416 STDOUT terraform:  + i 2025-03-10 23:35:38.416630 | orchestrator | 23:35:38.416 STDOUT terraform: d = (known after apply) 2025-03-10 23:35:38.416641 | orchestrator | 23:35:38.416 STDOUT terraform:  + instance_id = (known after apply) 2025-03-10 23:35:38.416679 | orchestrator | 23:35:38.416 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.416687 | orchestrator | 23:35:38.416 STDOUT terraform:  + volume_id = (known after apply) 2025-03-10 23:35:38.416706 | orchestrator | 23:35:38.416 STDOUT terraform:  } 2025-03-10 23:35:38.416770 
| orchestrator | 23:35:38.416 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-03-10 23:35:38.416828 | orchestrator | 23:35:38.416 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-03-10 23:35:38.416854 | orchestrator | 23:35:38.416 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-10 23:35:38.416879 | orchestrator | 23:35:38.416 STDOUT terraform:  + floating_ip = (known after apply) 2025-03-10 23:35:38.416908 | orchestrator | 23:35:38.416 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.416951 | orchestrator | 23:35:38.416 STDOUT terraform:  + port_id = (known after apply) 2025-03-10 23:35:38.416982 | orchestrator | 23:35:38.416 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.417037 | orchestrator | 23:35:38.416 STDOUT terraform:  } 2025-03-10 23:35:38.417043 | orchestrator | 23:35:38.416 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-03-10 23:35:38.417086 | orchestrator | 23:35:38.417 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-03-10 23:35:38.417107 | orchestrator | 23:35:38.417 STDOUT terraform:  + address = (known after apply) 2025-03-10 23:35:38.417133 | orchestrator | 23:35:38.417 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.417154 | orchestrator | 23:35:38.417 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-10 23:35:38.417174 | orchestrator | 23:35:38.417 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.417201 | orchestrator | 23:35:38.417 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-10 23:35:38.417227 | orchestrator | 23:35:38.417 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.417234 | orchestrator | 23:35:38.417 STDOUT terraform:  + pool = "public" 2025-03-10 23:35:38.417267 | 
orchestrator | 23:35:38.417 STDOUT terraform:  + port_id = (known after apply) 2025-03-10 23:35:38.417287 | orchestrator | 23:35:38.417 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.417312 | orchestrator | 23:35:38.417 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.417333 | orchestrator | 23:35:38.417 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.417391 | orchestrator | 23:35:38.417 STDOUT terraform:  } 2025-03-10 23:35:38.417398 | orchestrator | 23:35:38.417 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-03-10 23:35:38.417450 | orchestrator | 23:35:38.417 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-03-10 23:35:38.417487 | orchestrator | 23:35:38.417 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.417525 | orchestrator | 23:35:38.417 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.417546 | orchestrator | 23:35:38.417 STDOUT terraform:  + availability_zone_hints = [ 2025-03-10 23:35:38.417564 | orchestrator | 23:35:38.417 STDOUT terraform:  + "nova", 2025-03-10 23:35:38.417572 | orchestrator | 23:35:38.417 STDOUT terraform:  ] 2025-03-10 23:35:38.417603 | orchestrator | 23:35:38.417 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-10 23:35:38.417639 | orchestrator | 23:35:38.417 STDOUT terraform:  + external = (known after apply) 2025-03-10 23:35:38.417677 | orchestrator | 23:35:38.417 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.417713 | orchestrator | 23:35:38.417 STDOUT terraform:  + mtu = (known after apply) 2025-03-10 23:35:38.417752 | orchestrator | 23:35:38.417 STDOUT terraform:  + name = "net-testbed-management" 2025-03-10 23:35:38.417787 | orchestrator | 23:35:38.417 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.417822 | orchestrator | 23:35:38.417 STDOUT terraform:  + 
qos_policy_id = (known after apply) 2025-03-10 23:35:38.417859 | orchestrator | 23:35:38.417 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.417897 | orchestrator | 23:35:38.417 STDOUT terraform:  + shared = (known after apply) 2025-03-10 23:35:38.417932 | orchestrator | 23:35:38.417 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.417966 | orchestrator | 23:35:38.417 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-03-10 23:35:38.417988 | orchestrator | 23:35:38.417 STDOUT terraform:  + segments (known after apply) 2025-03-10 23:35:38.418067 | orchestrator | 23:35:38.417 STDOUT terraform:  } 2025-03-10 23:35:38.418076 | orchestrator | 23:35:38.417 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-03-10 23:35:38.418107 | orchestrator | 23:35:38.418 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-03-10 23:35:38.418143 | orchestrator | 23:35:38.418 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.418178 | orchestrator | 23:35:38.418 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.418213 | orchestrator | 23:35:38.418 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.418250 | orchestrator | 23:35:38.418 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.418285 | orchestrator | 23:35:38.418 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.418320 | orchestrator | 23:35:38.418 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.418355 | orchestrator | 23:35:38.418 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.418399 | orchestrator | 23:35:38.418 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.418435 | orchestrator | 23:35:38.418 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.418472 | 
orchestrator | 23:35:38.418 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.418508 | orchestrator | 23:35:38.418 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.418542 | orchestrator | 23:35:38.418 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.418576 | orchestrator | 23:35:38.418 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.418612 | orchestrator | 23:35:38.418 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.418647 | orchestrator | 23:35:38.418 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.418682 | orchestrator | 23:35:38.418 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.418689 | orchestrator | 23:35:38.418 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.418726 | orchestrator | 23:35:38.418 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.418754 | orchestrator | 23:35:38.418 STDOUT terraform:  } 2025-03-10 23:35:38.418761 | orchestrator | 23:35:38.418 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.418781 | orchestrator | 23:35:38.418 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:35:38.418812 | orchestrator | 23:35:38.418 STDOUT terraform:  } 2025-03-10 23:35:38.418820 | orchestrator | 23:35:38.418 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:35:38.418848 | orchestrator | 23:35:38.418 STDOUT terraform:  + fixed_ip { 2025-03-10 23:35:38.418855 | orchestrator | 23:35:38.418 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-03-10 23:35:38.418877 | orchestrator | 23:35:38.418 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.418888 | orchestrator | 23:35:38.418 STDOUT terraform:  } 2025-03-10 23:35:38.418894 | orchestrator | 23:35:38.418 STDOUT terraform:  } 2025-03-10 23:35:38.418941 | orchestrator | 23:35:38.418 STDOUT terraform:  # 
openstack_networking_port_v2.node_port_management[0] will be created 2025-03-10 23:35:38.418986 | orchestrator | 23:35:38.418 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:35:38.419021 | orchestrator | 23:35:38.418 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.419056 | orchestrator | 23:35:38.419 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.419089 | orchestrator | 23:35:38.419 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.419126 | orchestrator | 23:35:38.419 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.419161 | orchestrator | 23:35:38.419 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.419197 | orchestrator | 23:35:38.419 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.419232 | orchestrator | 23:35:38.419 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.419269 | orchestrator | 23:35:38.419 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.419305 | orchestrator | 23:35:38.419 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.419339 | orchestrator | 23:35:38.419 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.419375 | orchestrator | 23:35:38.419 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.419414 | orchestrator | 23:35:38.419 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.419448 | orchestrator | 23:35:38.419 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.419483 | orchestrator | 23:35:38.419 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.419521 | orchestrator | 23:35:38.419 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.419557 | orchestrator | 23:35:38.419 STDOUT terraform:  + tenant_id = 
(known after apply) 2025-03-10 23:35:38.419565 | orchestrator | 23:35:38.419 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.419601 | orchestrator | 23:35:38.419 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.419628 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419635 | orchestrator | 23:35:38.419 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.419662 | orchestrator | 23:35:38.419 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:35:38.419689 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419696 | orchestrator | 23:35:38.419 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.419716 | orchestrator | 23:35:38.419 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:35:38.419744 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419751 | orchestrator | 23:35:38.419 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.419770 | orchestrator | 23:35:38.419 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:35:38.419801 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419810 | orchestrator | 23:35:38.419 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:35:38.419835 | orchestrator | 23:35:38.419 STDOUT terraform:  + fixed_ip { 2025-03-10 23:35:38.419842 | orchestrator | 23:35:38.419 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-03-10 23:35:38.419862 | orchestrator | 23:35:38.419 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.419881 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419888 | orchestrator | 23:35:38.419 STDOUT terraform:  } 2025-03-10 23:35:38.419926 | orchestrator | 23:35:38.419 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-03-10 23:35:38.419971 | orchestrator | 23:35:38.419 STDOUT terraform:  + resource 
"openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:35:38.420007 | orchestrator | 23:35:38.419 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.420041 | orchestrator | 23:35:38.419 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.420076 | orchestrator | 23:35:38.420 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.420114 | orchestrator | 23:35:38.420 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.420148 | orchestrator | 23:35:38.420 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.420184 | orchestrator | 23:35:38.420 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.420219 | orchestrator | 23:35:38.420 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.420254 | orchestrator | 23:35:38.420 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.420290 | orchestrator | 23:35:38.420 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.420326 | orchestrator | 23:35:38.420 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.425465 | orchestrator | 23:35:38.420 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.425534 | orchestrator | 23:35:38.420 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.425551 | orchestrator | 23:35:38.420 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.425564 | orchestrator | 23:35:38.420 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.425577 | orchestrator | 23:35:38.420 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.425589 | orchestrator | 23:35:38.420 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.425602 | orchestrator | 23:35:38.420 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.425615 | 
orchestrator | 23:35:38.420 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.425628 | orchestrator | 23:35:38.420 STDOUT terraform:  } 2025-03-10 23:35:38.425641 | orchestrator | 23:35:38.420 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.425653 | orchestrator | 23:35:38.420 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:35:38.425666 | orchestrator | 23:35:38.420 STDOUT terraform:  } 2025-03-10 23:35:38.425679 | orchestrator | 23:35:38.420 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.425692 | orchestrator | 23:35:38.420 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:35:38.425704 | orchestrator | 23:35:38.420 STDOUT terraform:  } 2025-03-10 23:35:38.425717 | orchestrator | 23:35:38.420 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.425729 | orchestrator | 23:35:38.420 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:35:38.425742 | orchestrator | 23:35:38.420 STDOUT terraform:  } 2025-03-10 23:35:38.425754 | orchestrator | 23:35:38.420 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:35:38.425767 | orchestrator | 23:35:38.421 STDOUT terraform:  + fixed_ip { 2025-03-10 23:35:38.425780 | orchestrator | 23:35:38.421 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-03-10 23:35:38.425792 | orchestrator | 23:35:38.421 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.425804 | orchestrator | 23:35:38.421 STDOUT terraform:  } 2025-03-10 23:35:38.425817 | orchestrator | 23:35:38.421 STDOUT terraform:  } 2025-03-10 23:35:38.425829 | orchestrator | 23:35:38.421 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-03-10 23:35:38.425843 | orchestrator | 23:35:38.421 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:35:38.425871 | orchestrator | 23:35:38.421 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-03-10 23:35:38.425884 | orchestrator | 23:35:38.421 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.425897 | orchestrator | 23:35:38.421 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.425909 | orchestrator | 23:35:38.421 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.425922 | orchestrator | 23:35:38.421 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.425935 | orchestrator | 23:35:38.421 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.425948 | orchestrator | 23:35:38.421 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.425963 | orchestrator | 23:35:38.421 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.425975 | orchestrator | 23:35:38.421 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.425988 | orchestrator | 23:35:38.421 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.426001 | orchestrator | 23:35:38.421 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.426057 | orchestrator | 23:35:38.421 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.426086 | orchestrator | 23:35:38.421 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.426107 | orchestrator | 23:35:38.421 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.426121 | orchestrator | 23:35:38.421 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.426134 | orchestrator | 23:35:38.421 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.426146 | orchestrator | 23:35:38.421 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426159 | orchestrator | 23:35:38.421 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.426172 | orchestrator | 23:35:38.421 STDOUT terraform:  } 2025-03-10 
23:35:38.426184 | orchestrator | 23:35:38.421 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426197 | orchestrator | 23:35:38.421 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:35:38.426209 | orchestrator | 23:35:38.421 STDOUT terraform:  } 2025-03-10 23:35:38.426222 | orchestrator | 23:35:38.421 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426235 | orchestrator | 23:35:38.421 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:35:38.426247 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426260 | orchestrator | 23:35:38.422 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426272 | orchestrator | 23:35:38.422 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:35:38.426284 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426297 | orchestrator | 23:35:38.422 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:35:38.426309 | orchestrator | 23:35:38.422 STDOUT terraform:  + fixed_ip { 2025-03-10 23:35:38.426329 | orchestrator | 23:35:38.422 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-03-10 23:35:38.426342 | orchestrator | 23:35:38.422 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.426354 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426367 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426430 | orchestrator | 23:35:38.422 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-03-10 23:35:38.426445 | orchestrator | 23:35:38.422 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:35:38.426458 | orchestrator | 23:35:38.422 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.426470 | orchestrator | 23:35:38.422 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.426483 | orchestrator | 
23:35:38.422 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.426495 | orchestrator | 23:35:38.422 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:35:38.426508 | orchestrator | 23:35:38.422 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.426521 | orchestrator | 23:35:38.422 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.426533 | orchestrator | 23:35:38.422 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.426546 | orchestrator | 23:35:38.422 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.426558 | orchestrator | 23:35:38.422 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.426571 | orchestrator | 23:35:38.422 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.426583 | orchestrator | 23:35:38.422 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.426596 | orchestrator | 23:35:38.422 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.426608 | orchestrator | 23:35:38.422 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.426621 | orchestrator | 23:35:38.422 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.426634 | orchestrator | 23:35:38.422 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.426657 | orchestrator | 23:35:38.422 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.426670 | orchestrator | 23:35:38.422 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426683 | orchestrator | 23:35:38.422 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.426696 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426709 | orchestrator | 23:35:38.422 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426721 | orchestrator | 23:35:38.422 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-03-10 23:35:38.426734 | orchestrator | 23:35:38.422 STDOUT terraform:  } 2025-03-10 23:35:38.426752 | orchestrator | 23:35:38.422 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426765 | orchestrator | 23:35:38.423 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:35:38.426784 | orchestrator | 23:35:38.423 STDOUT terraform:  } 2025-03-10 23:35:38.426797 | orchestrator | 23:35:38.423 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.426809 | orchestrator | 23:35:38.423 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:35:38.426822 | orchestrator | 23:35:38.423 STDOUT terraform:  } 2025-03-10 23:35:38.426834 | orchestrator | 23:35:38.423 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:35:38.426847 | orchestrator | 23:35:38.423 STDOUT terraform:  + fixed_ip { 2025-03-10 23:35:38.426860 | orchestrator | 23:35:38.423 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-03-10 23:35:38.426872 | orchestrator | 23:35:38.423 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:35:38.426885 | orchestrator | 23:35:38.423 STDOUT terraform:  } 2025-03-10 23:35:38.426901 | orchestrator | 23:35:38.423 STDOUT terraform:  } 2025-03-10 23:35:38.426914 | orchestrator | 23:35:38.423 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-10 23:35:38.426927 | orchestrator | 23:35:38.423 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:35:38.426940 | orchestrator | 23:35:38.423 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:35:38.426952 | orchestrator | 23:35:38.423 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:35:38.426965 | orchestrator | 23:35:38.423 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:35:38.426977 | orchestrator | 23:35:38.423 STDOUT terraform:  + all_tags = (known 
after apply) 2025-03-10 23:35:38.426989 | orchestrator | 23:35:38.423 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:35:38.427002 | orchestrator | 23:35:38.423 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:35:38.427015 | orchestrator | 23:35:38.423 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:35:38.427066 | orchestrator | 23:35:38.423 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:35:38.427080 | orchestrator | 23:35:38.423 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:35:38.427093 | orchestrator | 23:35:38.423 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:35:38.427105 | orchestrator | 23:35:38.423 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:35:38.427118 | orchestrator | 23:35:38.423 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:35:38.427130 | orchestrator | 23:35:38.423 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:35:38.427143 | orchestrator | 23:35:38.423 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:35:38.427155 | orchestrator | 23:35:38.423 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:35:38.427168 | orchestrator | 23:35:38.423 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:35:38.427181 | orchestrator | 23:35:38.423 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.427194 | orchestrator | 23:35:38.423 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:35:38.427225 | orchestrator | 23:35:38.424 STDOUT terraform:  } 2025-03-10 23:35:38.427239 | orchestrator | 23:35:38.424 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:35:38.427252 | orchestrator | 23:35:38.424 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:35:38.427265 | orchestrator | 23:35:38.424 STDOUT terraform:  } 2025-03-10 23:35:38.427278 | orchestrator | 23:35:38.424 
2025-03-10 23:35:38 | orchestrator | 23:35:38.424 STDOUT terraform:
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
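For reference, a management port like the one planned above could be declared with HCL along these lines. This is only a sketch reconstructed from the plan output, not the testbed's actual source; the count, the naming scheme, and the network/subnet references are assumptions.

```hcl
# Sketch only: reconstructed from the plan output above, not the actual
# testbed source. The count, naming, and resource references are assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                      # plan shows indices up to [5]
  name       = "node_port_management-${count.index}"  # assumed naming
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"     # [5] gets 192.168.16.15
  }

  # VIP and overlay ranges the nodes may answer for, as seen in the plan
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}
```

Without the `allowed_address_pairs` entries, Neutron port security would drop traffic the nodes send from those shared VIP addresses.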
2025-03-10 23:35:38 | orchestrator | 23:35:38.425 STDOUT terraform:
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
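The router entry in the plan maps to a small amount of HCL. A hedged sketch reconstructed from the plan values (the router interface's subnet reference is an assumption):

```hcl
# Sketch only: reconstructed from the plan output above.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

# The interface attaching the router to the management subnet;
# the subnet reference is an assumption.
resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```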
2025-03-10 23:35:38 | orchestrator | 23:35:38.426 STDOUT terraform:
      + description       = "ssh"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 22
      + port_range_min    = 22
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description       = "wireguard"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + port_range_max    = 51820
      + port_range_min    = 51820
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "192.168.16.0/20"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "192.168.16.0/20"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "icmp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "tcp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "udp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "icmp"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description       = "vrrp"
      + direction         = "ingress"
      + ethertype         = "IPv4"
      + id                = (known after apply)
      + protocol          = "112"
      + region            = (known after apply)
      + remote_group_id   = (known after apply)
      + remote_ip_prefix  = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id         = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
      + network_id        = (known after apply)
      + no_gateway        = false
      + region            = (known after apply)
      + service_types     = (known after apply)
      + tenant_id         = (known after apply)
      + allocation_pool {
          + end   = "192.168.31.250"
          + start = "192.168.31.200"
        }
    }

  # terraform_data.image will be created
  + resource "terraform_data" "image" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
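The security groups, their rules, and the management subnet planned above correspond to straightforward HCL. A sketch reconstructed from the plan values — the `network_id` reference is an assumption, and only the SSH rule is shown as a representative of the rule set:

```hcl
# Sketch only: reconstructed from the plan output above.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# Rule 1 from the plan: SSH (22/tcp) from anywhere.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# The management subnet with its DNS servers and DHCP allocation pool.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note the allocation pool sits at the top of the /20 (192.168.31.200-250), leaving the lower 192.168.16.x addresses free for the statically assigned node ports.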
2025-03-10 23:35:38 | orchestrator | 23:35:38.430 STDOUT terraform:
      + output = (known after apply)
    }

  # terraform_data.image_node will be created
  + resource "terraform_data" "image_node" {
      + id     = (known after apply)
      + input  = "Ubuntu 24.04"
      + output = (known after apply)
    }

23:35:38.431 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy.
23:35:38.431 STDOUT terraform: Changes to Outputs:
23:35:38.431 STDOUT terraform:   + manager_address = (sensitive value)
23:35:38.431 STDOUT terraform:   + private_key     = (sensitive value)
23:35:38.613 STDOUT terraform: terraform_data.image: Creating...
23:35:38.614 STDOUT terraform: terraform_data.image_node: Creating...
23:35:38.614 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b882f20c-dd44-5e8b-c84e-368ed6a53fa9]
23:35:38.615 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=64d48075-d37e-f091-b7b6-18d7cdeff01b]
23:35:38.629 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
23:35:38.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating...
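The `terraform_data` resources and sensitive outputs in the plan have simple HCL counterparts. A hedged sketch from the plan values — the source expression for the output is an assumption, since the log only shows `(sensitive value)`:

```hcl
# Sketch only: a terraform_data resource like the two in the plan stores
# its input ("Ubuntu 24.04") so other resources can track changes to it.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# The outputs are marked sensitive, which is why the log prints
# "(sensitive value)" instead of the address itself. The value
# expression here is an assumption, not taken from the log.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}
```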
23:35:38.637 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
23:35:38.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating...
23:35:38.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
23:35:38.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating...
23:35:38.638 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
23:35:38.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
23:35:38.638 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
23:35:38.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating...
23:35:39.095 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
23:35:39.101 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating...
23:35:39.103 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
23:35:39.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
23:35:39.478 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
23:35:39.486 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
23:35:44.489 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 5s [id=1e5417a3-9fb9-4a4b-9d53-ff66be7ceb04]
23:35:44.497 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating...
23:35:48.635 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed]
23:35:48.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed]
23:35:48.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed]
23:35:48.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
23:35:48.639 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
23:35:48.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed]
23:35:49.102 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed]
23:35:49.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
23:35:49.199 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=ed3d5c7a-4300-47cf-88fa-db7e232461c4]
23:35:49.201 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=4a50e71d-fbe2-4470-bd50-185934b47889]
23:35:49.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
23:35:49.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
23:35:49.212 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=6320f54b-3b67-4e1d-9431-2e1a5be0b8d0]
23:35:49.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
23:35:49.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=4dc4161b-8ed1-4e64-9782-2a846a023c92]
23:35:49.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
23:35:49.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5]
23:35:49.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
23:35:49.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=53d17819-fe6c-46d4-9ebd-f2e48ea5e4aa]
23:35:49.277 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating...
23:35:49.311 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=2ca5ef8e-9fe9-400b-8f24-d393273052c7]
23:35:49.319 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating...
23:35:49.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=737c5522-721d-4561-a9d1-64ed72a6949f]
23:35:49.337 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating...
23:35:49.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
23:35:49.691 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=8e40192d-c923-4cb6-82dc-fc15c778e98b]
23:35:49.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
23:35:54.499 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed]
23:35:54.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=a7a63586-9264-4746-b73e-29ce5d541c43]
23:35:54.669 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
23:35:59.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
23:35:59.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
23:35:59.220 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
23:35:59.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
23:35:59.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
23:35:59.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed]
23:35:59.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed]
23:35:59.338 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed]
23:35:59.377 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=58a7266a-08ba-4e1a-9dfb-53f2a81a2be7]
23:35:59.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
23:35:59.401 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=33355daa-b941-47f4-b8bd-5e6f19c9fbdc]
23:35:59.421 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
23:35:59.422 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=ef8fa17c-1885-4415-b267-a55d447b75a1]
23:35:59.427 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-03-10 23:35:59.443604 | orchestrator | 23:35:59.443 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=e56119bb-25c8-44e9-a07b-17c74eee4ad0]
2025-03-10 23:35:59.451905 | orchestrator | 23:35:59.451 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-03-10 23:35:59.467280 | orchestrator | 23:35:59.466 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=7de028b3-7e0d-4688-b625-ea2556c506ce]
2025-03-10 23:35:59.471672 | orchestrator | 23:35:59.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=a404a76d-1978-41bb-a69d-8095668152b7]
2025-03-10 23:35:59.474397 | orchestrator | 23:35:59.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-03-10 23:35:59.490529 | orchestrator | 23:35:59.490 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-03-10 23:35:59.500133 | orchestrator | 23:35:59.499 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4471afadd4e7c4e1c08bbc6b9d2ccd0394a2c91e]
2025-03-10 23:35:59.507814 | orchestrator | 23:35:59.507 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-03-10 23:35:59.513748 | orchestrator | 23:35:59.513 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=dd5c7437516f3d1187ad8c445ce845b3c7216229]
2025-03-10 23:35:59.518747 | orchestrator | 23:35:59.518 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-03-10 23:35:59.520087 | orchestrator | 23:35:59.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=64551d7b-1b37-434e-b21d-f220f4395bf2]
2025-03-10 23:35:59.522685 | orchestrator | 23:35:59.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=9bc23235-2266-4af6-bdbc-90727a536515]
2025-03-10 23:35:59.703879 | orchestrator | 23:35:59.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-03-10 23:36:00.033470 | orchestrator | 23:36:00.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=66c0b794-c9eb-432e-948e-a9141cffb78f]
2025-03-10 23:36:04.671062 | orchestrator | 23:36:04.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-03-10 23:36:04.969430 | orchestrator | 23:36:04.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=71229aa9-8ade-48b7-b965-f199404d9b59]
2025-03-10 23:36:05.186346 | orchestrator | 23:36:05.185 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=4812f3da-346b-48fb-a69b-8073938be417]
2025-03-10 23:36:05.195717 | orchestrator | 23:36:05.195 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-03-10 23:36:09.393970 | orchestrator | 23:36:09.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-03-10 23:36:09.426161 | orchestrator | 23:36:09.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-03-10 23:36:09.428288 | orchestrator | 23:36:09.428 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-03-10 23:36:09.452820 | orchestrator | 23:36:09.452 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-03-10 23:36:09.475144 | orchestrator | 23:36:09.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-03-10 23:36:09.710652 | orchestrator | 23:36:09.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=6d30e5c3-c47c-454d-b244-24ec87ab5275]
2025-03-10 23:36:09.791124 | orchestrator | 23:36:09.790 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=fcfa6566-4353-497f-91d8-2edcc85b7835]
2025-03-10 23:36:09.791946 | orchestrator | 23:36:09.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=df2716de-b955-40c5-b1c8-3fc317cacfc9]
2025-03-10 23:36:09.839055 | orchestrator | 23:36:09.838 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=e2f7c520-943c-4d28-b4ec-58621e2a24ff]
2025-03-10 23:36:09.845515 | orchestrator | 23:36:09.845 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=194aeec8-a038-4d98-ad9f-169d629e88aa]
2025-03-10 23:36:11.732911 | orchestrator | 23:36:11.732 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=80cf69b6-c754-422e-a016-08c1bc9285ef]
2025-03-10 23:36:11.740288 | orchestrator | 23:36:11.740 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-03-10 23:36:11.741335 | orchestrator | 23:36:11.741 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-03-10 23:36:11.744129 | orchestrator | 23:36:11.743 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-03-10 23:36:11.892881 | orchestrator | 23:36:11.892 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=19404912-a999-48f5-b3df-ac7798ad93bd]
2025-03-10 23:36:11.900705 | orchestrator | 23:36:11.900 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4ecec6fc-727d-44c5-8b33-bc01c97edfd0]
2025-03-10 23:36:11.906747 | orchestrator | 23:36:11.906 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-03-10 23:36:11.907257 | orchestrator | 23:36:11.907 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-03-10 23:36:11.911502 | orchestrator | 23:36:11.911 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-03-10 23:36:11.915502 | orchestrator | 23:36:11.915 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-03-10 23:36:11.916378 | orchestrator | 23:36:11.916 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-03-10 23:36:11.918111 | orchestrator | 23:36:11.917 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-03-10 23:36:11.918757 | orchestrator | 23:36:11.918 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-03-10 23:36:11.923800 | orchestrator | 23:36:11.923 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-03-10 23:36:11.924339 | orchestrator | 23:36:11.924 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-03-10 23:36:12.110221 | orchestrator | 23:36:12.109 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d9069fe0-1e6d-4f3a-a788-2f1cd547fb84]
2025-03-10 23:36:12.125293 | orchestrator | 23:36:12.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-03-10 23:36:12.369469 | orchestrator | 23:36:12.369 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=4f48fed3-e2b6-414d-b6e6-e509da5b0f31]
2025-03-10 23:36:12.377323 | orchestrator | 23:36:12.377 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-03-10 23:36:12.491659 | orchestrator | 23:36:12.491 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b983d8fc-eee8-490b-af77-ada23b09febe]
2025-03-10 23:36:12.501130 | orchestrator | 23:36:12.500 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-03-10 23:36:12.551542 | orchestrator | 23:36:12.551 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=e4a81761-1d0e-47a0-b71f-1b0afdecc3ea]
2025-03-10 23:36:12.557325 | orchestrator | 23:36:12.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-03-10 23:36:12.607507 | orchestrator | 23:36:12.607 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=7c9bd27f-f26a-4faf-9c57-c8da3365b6bf]
2025-03-10 23:36:12.614148 | orchestrator | 23:36:12.613 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-03-10 23:36:12.669898 | orchestrator | 23:36:12.669 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=312c9c72-04c2-4f12-8fc5-9e49f916813e]
2025-03-10 23:36:12.676443 | orchestrator | 23:36:12.676 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-03-10 23:36:12.778826 | orchestrator | 23:36:12.778 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=1f00b7bf-c108-4979-a5e7-a697354da3dc]
2025-03-10 23:36:12.789977 | orchestrator | 23:36:12.789 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-03-10 23:36:13.048579 | orchestrator | 23:36:13.048 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=eb8e05c8-7945-4919-b435-a0a214463f36]
2025-03-10 23:36:13.159825 | orchestrator | 23:36:13.159 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=e3675b9b-1001-4a2d-adcf-bf8ef208a594]
2025-03-10 23:36:17.571840 | orchestrator | 23:36:17.571 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=c791deea-c6bc-4e0a-85cd-3be1d4b3d78c]
2025-03-10 23:36:17.615922 | orchestrator | 23:36:17.615 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=b5734ee6-4680-4770-bf07-c8f0dc30f574]
2025-03-10 23:36:17.666324 | orchestrator | 23:36:17.665 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3673f77f-c40a-4761-8db8-c80b33cd541e]
2025-03-10 23:36:17.792190 | orchestrator | 23:36:17.791 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=3bea2d1d-9f70-4f20-9262-be62221bfc6d]
2025-03-10 23:36:18.019180 | orchestrator | 23:36:18.018 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=9a6747c6-a514-40b7-bac4-e7abddd476af]
2025-03-10 23:36:18.168654 | orchestrator | 23:36:18.168 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=df778e65-1dfe-436c-9042-a5f37b778f23]
2025-03-10 23:36:18.285307 | orchestrator | 23:36:18.284 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=42c14717-af58-4b05-9230-29c3be21d28d]
2025-03-10 23:36:18.603205 | orchestrator | 23:36:18.602 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=5e8a5f2f-64aa-4197-bdb6-2c1100d004cf]
2025-03-10 23:36:18.623715 | orchestrator | 23:36:18.623 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-03-10 23:36:18.637593 | orchestrator | 23:36:18.637 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-03-10 23:36:18.640557 | orchestrator | 23:36:18.640 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-03-10 23:36:18.641840 | orchestrator | 23:36:18.641 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-03-10 23:36:18.643734 | orchestrator | 23:36:18.643 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-03-10 23:36:18.652474 | orchestrator | 23:36:18.650 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-03-10 23:36:18.657580 | orchestrator | 23:36:18.657 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-03-10 23:36:25.047713 | orchestrator | 23:36:25.047 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=90d063b2-e50b-4262-8cef-ace878cd1831]
2025-03-10 23:36:25.072436 | orchestrator | 23:36:25.072 STDOUT terraform: local_file.inventory: Creating...
2025-03-10 23:36:25.073466 | orchestrator | 23:36:25.073 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-03-10 23:36:25.073630 | orchestrator | 23:36:25.073 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-03-10 23:36:25.080753 | orchestrator | 23:36:25.080 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de6f140b1ec6de3472550620023e434bc64168b8]
2025-03-10 23:36:25.081361 | orchestrator | 23:36:25.081 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=a0c63c501010ff3de7f4fa117ddb90153d9f89e8]
2025-03-10 23:36:25.565673 | orchestrator | 23:36:25.565 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=90d063b2-e50b-4262-8cef-ace878cd1831]
2025-03-10 23:36:28.641168 | orchestrator | 23:36:28.640 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-03-10 23:36:28.647096 | orchestrator | 23:36:28.646 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-03-10 23:36:28.647180 | orchestrator | 23:36:28.647 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-03-10 23:36:28.650433 | orchestrator | 23:36:28.650 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-03-10 23:36:28.653676 | orchestrator | 23:36:28.653 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-03-10 23:36:28.658945 | orchestrator | 23:36:28.658 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-03-10 23:36:38.641849 | orchestrator | 23:36:38.641 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-03-10 23:36:38.647999 | orchestrator | 23:36:38.647 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-03-10 23:36:38.648311 | orchestrator | 23:36:38.648 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-03-10 23:36:38.651359 | orchestrator | 23:36:38.651 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-03-10 23:36:38.654652 | orchestrator | 23:36:38.654 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-03-10 23:36:38.660088 | orchestrator | 23:36:38.659 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-03-10 23:36:38.942094 | orchestrator | 23:36:38.941 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=b2b4c49e-7f42-4a94-8e0b-aafe2cc535c9]
2025-03-10 23:36:39.092565 | orchestrator | 23:36:39.092 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=0220192e-eed7-413e-adf8-2dfa76e0b554]
2025-03-10 23:36:39.173408 | orchestrator | 23:36:39.173 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=a60c18b9-4f61-4b0e-861d-3448191e1e0b]
2025-03-10 23:36:48.643612 | orchestrator | 23:36:48.643 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-03-10 23:36:48.649774 | orchestrator | 23:36:48.649 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-03-10 23:36:48.652057 | orchestrator | 23:36:48.651 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-03-10 23:36:49.221626 | orchestrator | 23:36:49.221 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=fa5e4ec1-e7d2-4220-87c6-6f7f30be8a52]
2025-03-10 23:36:49.325061 | orchestrator | 23:36:49.324 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=513a8368-cef8-4bcf-a6fb-c06517b89b4d]
2025-03-10 23:36:51.737600 | orchestrator | 23:36:51.737 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 33s [id=f0c1a90a-478f-4e0f-8c68-250fdbeeaf89]
2025-03-10 23:36:51.747914 | orchestrator | 23:36:51.747 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-03-10 23:36:51.752294 | orchestrator | 23:36:51.752 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5770845807497520395]
2025-03-10 23:36:51.772987 | orchestrator | 23:36:51.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-03-10 23:36:51.779295 | orchestrator | 23:36:51.779 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-03-10 23:36:51.782894 | orchestrator | 23:36:51.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-03-10 23:36:51.783291 | orchestrator | 23:36:51.783 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-03-10 23:36:51.787210 | orchestrator | 23:36:51.787 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-03-10 23:36:51.790956 | orchestrator | 23:36:51.790 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-03-10 23:36:51.793253 | orchestrator | 23:36:51.793 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-03-10 23:36:51.800418 | orchestrator | 23:36:51.800 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-03-10 23:36:51.802641 | orchestrator | 23:36:51.802 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-03-10 23:36:51.826610 | orchestrator | 23:36:51.826 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-03-10 23:36:57.100928 | orchestrator | 23:36:57.100 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=fa5e4ec1-e7d2-4220-87c6-6f7f30be8a52/4dc4161b-8ed1-4e64-9782-2a846a023c92]
2025-03-10 23:36:57.118922 | orchestrator | 23:36:57.118 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-03-10 23:36:57.140560 | orchestrator | 23:36:57.140 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=a60c18b9-4f61-4b0e-861d-3448191e1e0b/9bc23235-2266-4af6-bdbc-90727a536515]
2025-03-10 23:36:57.140840 | orchestrator | 23:36:57.140 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=0220192e-eed7-413e-adf8-2dfa76e0b554/e56119bb-25c8-44e9-a07b-17c74eee4ad0]
2025-03-10 23:36:57.158373 | orchestrator | 23:36:57.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-03-10 23:36:57.158639 | orchestrator | 23:36:57.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-03-10 23:36:57.160152 | orchestrator | 23:36:57.159 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=b2b4c49e-7f42-4a94-8e0b-aafe2cc535c9/4a50e71d-fbe2-4470-bd50-185934b47889]
2025-03-10 23:36:57.169317 | orchestrator | 23:36:57.168 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=513a8368-cef8-4bcf-a6fb-c06517b89b4d/58a7266a-08ba-4e1a-9dfb-53f2a81a2be7]
2025-03-10 23:36:57.175748 | orchestrator | 23:36:57.175 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-03-10 23:36:57.177876 | orchestrator | 23:36:57.177 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=f0c1a90a-478f-4e0f-8c68-250fdbeeaf89/737c5522-721d-4561-a9d1-64ed72a6949f]
2025-03-10 23:36:57.180530 | orchestrator | 23:36:57.180 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=a60c18b9-4f61-4b0e-861d-3448191e1e0b/2ca5ef8e-9fe9-400b-8f24-d393273052c7]
2025-03-10 23:36:57.184894 | orchestrator | 23:36:57.184 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-03-10 23:36:57.190193 | orchestrator | 23:36:57.189 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=0220192e-eed7-413e-adf8-2dfa76e0b554/6320f54b-3b67-4e1d-9431-2e1a5be0b8d0]
2025-03-10 23:36:57.193866 | orchestrator | 23:36:57.193 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-03-10 23:36:57.195711 | orchestrator | 23:36:57.195 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-03-10 23:36:57.206202 | orchestrator | 23:36:57.206 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-03-10 23:36:57.211206 | orchestrator | 23:36:57.211 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=f0c1a90a-478f-4e0f-8c68-250fdbeeaf89/53d17819-fe6c-46d4-9ebd-f2e48ea5e4aa]
2025-03-10 23:36:57.222874 | orchestrator | 23:36:57.222 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-03-10 23:37:00.295392 | orchestrator | 23:37:00.294 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 8s [id=0220192e-eed7-413e-adf8-2dfa76e0b554/64551d7b-1b37-434e-b21d-f220f4395bf2]
2025-03-10 23:37:02.804194 | orchestrator | 23:37:02.803 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=fa5e4ec1-e7d2-4220-87c6-6f7f30be8a52/cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5]
2025-03-10 23:37:02.825698 | orchestrator | 23:37:02.825 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=513a8368-cef8-4bcf-a6fb-c06517b89b4d/a7a63586-9264-4746-b73e-29ce5d541c43]
2025-03-10 23:37:02.826187 | orchestrator | 23:37:02.825 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=a60c18b9-4f61-4b0e-861d-3448191e1e0b/7de028b3-7e0d-4688-b625-ea2556c506ce]
2025-03-10 23:37:02.847549 | orchestrator | 23:37:02.847 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=b2b4c49e-7f42-4a94-8e0b-aafe2cc535c9/ef8fa17c-1885-4415-b267-a55d447b75a1]
2025-03-10 23:37:02.849121 | orchestrator | 23:37:02.848 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=f0c1a90a-478f-4e0f-8c68-250fdbeeaf89/8e40192d-c923-4cb6-82dc-fc15c778e98b]
2025-03-10 23:37:02.853890 | orchestrator | 23:37:02.853 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=fa5e4ec1-e7d2-4220-87c6-6f7f30be8a52/a404a76d-1978-41bb-a69d-8095668152b7]
2025-03-10 23:37:02.867846 | orchestrator | 23:37:02.867 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=b2b4c49e-7f42-4a94-8e0b-aafe2cc535c9/ed3d5c7a-4300-47cf-88fa-db7e232461c4]
2025-03-10 23:37:02.874680 | orchestrator | 23:37:02.874 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=513a8368-cef8-4bcf-a6fb-c06517b89b4d/33355daa-b941-47f4-b8bd-5e6f19c9fbdc]
2025-03-10 23:37:07.223988 | orchestrator | 23:37:07.223 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-03-10 23:37:17.224544 | orchestrator | 23:37:17.224 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-03-10 23:37:17.906567 | orchestrator | 23:37:17.906 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b94d4bd7-b9a8-4df3-864b-f1f5b295311c]
2025-03-10 23:37:17.933158 | orchestrator | 23:37:17.932 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-03-10 23:37:17.933246 | orchestrator | 23:37:17.933 STDOUT terraform: Outputs:
2025-03-10 23:37:17.944635 | orchestrator | 23:37:17.933 STDOUT terraform: manager_address =
2025-03-10 23:37:17.944883 | orchestrator | 23:37:17.933 STDOUT terraform: private_key =
2025-03-10 23:37:28.181683 | orchestrator | changed
2025-03-10 23:37:28.218038 |
2025-03-10 23:37:28.218218 | TASK [Fetch manager address]
2025-03-10 23:37:28.628848 | orchestrator | ok
2025-03-10 23:37:28.639866 |
2025-03-10 23:37:28.639978 | TASK [Set manager_host address]
2025-03-10 23:37:28.744849 | orchestrator | ok
2025-03-10 23:37:28.756821 |
2025-03-10 23:37:28.756945 | LOOP [Update ansible collections]
2025-03-10 23:37:29.527080 | orchestrator | changed
2025-03-10 23:37:30.271947 | orchestrator | changed
2025-03-10 23:37:30.294202 |
2025-03-10 23:37:30.294355 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-03-10 23:37:40.827798 | orchestrator | ok
2025-03-10 23:37:40.839548 |
2025-03-10 23:37:40.839661 | TASK [Wait a little longer for the manager so that everything is ready]
2025-03-10 23:38:40.892200 | orchestrator | ok
2025-03-10 23:38:40.903836 |
2025-03-10 23:38:40.903937 | TASK [Fetch manager ssh hostkey]
2025-03-10 23:38:41.959126 | orchestrator | Output suppressed because no_log was given
2025-03-10 23:38:41.970217 |
2025-03-10 23:38:41.970339 | TASK [Get ssh keypair from terraform environment]
2025-03-10 23:38:42.547916 | orchestrator | changed
2025-03-10 23:38:42.566257 |
2025-03-10 23:38:42.566418 | TASK [Point out that the following task takes some time and does not give any output]
2025-03-10 23:38:42.618001 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-03-10 23:38:42.629539 |
2025-03-10 23:38:42.629654 | TASK [Run manager part 0]
2025-03-10 23:38:43.495697 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-03-10 23:38:43.537911 | orchestrator |
2025-03-10 23:38:45.560773 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-03-10 23:38:45.560825 | orchestrator |
2025-03-10 23:38:45.560844 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-03-10 23:38:45.560862 | orchestrator | ok: [testbed-manager]
2025-03-10 23:38:47.416040 | orchestrator |
2025-03-10 23:38:47.416101 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-03-10 23:38:47.416113 | orchestrator |
2025-03-10 23:38:47.416119 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:38:47.416131 | orchestrator | ok: [testbed-manager]
2025-03-10 23:38:48.063237 | orchestrator |
2025-03-10 23:38:48.063368 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-03-10 23:38:48.063386 | orchestrator | ok: [testbed-manager]
2025-03-10 23:38:48.114956 | orchestrator |
2025-03-10 23:38:48.115005 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-03-10 23:38:48.115023 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.140400 | orchestrator |
2025-03-10 23:38:48.140434 | orchestrator | TASK [Update package cache] ****************************************************
2025-03-10 23:38:48.140449 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.162752 | orchestrator |
2025-03-10 23:38:48.162780 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-10 23:38:48.162791 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.185519 | orchestrator |
2025-03-10 23:38:48.185546 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-10 23:38:48.185557 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.206425 | orchestrator |
2025-03-10 23:38:48.206448 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-03-10 23:38:48.206458 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.227199 | orchestrator |
2025-03-10 23:38:48.227224 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-03-10 23:38:48.227234 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:48.248475 | orchestrator |
2025-03-10 23:38:48.248500 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-03-10 23:38:48.248511 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:38:49.038112 | orchestrator |
2025-03-10 23:38:49.038149 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-03-10 23:38:49.038163 | orchestrator | changed: [testbed-manager]
2025-03-10 23:41:23.230826 | orchestrator |
2025-03-10 23:41:23.230926 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-03-10 23:41:23.230968 | orchestrator | changed: [testbed-manager]
2025-03-10 23:42:49.162587 | orchestrator |
2025-03-10 23:42:49.162665 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-03-10 23:42:49.162687 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:11.694425 | orchestrator |
2025-03-10 23:43:11.694577 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-10 23:43:11.694620 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:21.487649 | orchestrator |
2025-03-10 23:43:21.487766 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-10 23:43:21.487803 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:21.538850 | orchestrator |
2025-03-10 23:43:21.538938 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-03-10 23:43:21.538969 | orchestrator | ok: [testbed-manager]
2025-03-10 23:43:22.346333 | orchestrator |
2025-03-10 23:43:22.346435 | orchestrator | TASK [Get current user] ********************************************************
2025-03-10 23:43:22.346469 | orchestrator | ok: [testbed-manager]
2025-03-10 23:43:23.092188 | orchestrator |
2025-03-10 23:43:23.092292 | orchestrator | TASK [Create venv directory] ***************************************************
2025-03-10 23:43:23.092336 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:30.764119 | orchestrator |
2025-03-10 23:43:30.764230 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-03-10 23:43:30.764266 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:37.659296 | orchestrator |
2025-03-10 23:43:37.659411 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-03-10 23:43:37.659455 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:40.731523 | orchestrator |
2025-03-10 23:43:40.731615 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-03-10 23:43:40.731647 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:42.718902 | orchestrator |
2025-03-10 23:43:42.719007 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-03-10 23:43:42.719042 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:43.841003 | orchestrator |
2025-03-10 23:43:43.841106 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-03-10 23:43:43.841138 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-10 23:43:43.880439 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-10 23:43:43.880498 | orchestrator |
2025-03-10 23:43:43.880517 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-03-10 23:43:43.880541 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-03-10 23:43:47.057241 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-03-10 23:43:47.057308 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-03-10 23:43:47.057317 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-03-10 23:43:47.057333 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-10 23:43:47.650119 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-10 23:43:47.650221 | orchestrator |
2025-03-10 23:43:47.650242 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-03-10 23:43:47.650273 | orchestrator | changed: [testbed-manager]
2025-03-10 23:44:06.882361 | orchestrator |
2025-03-10 23:44:06.882449 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-03-10 23:44:06.882478 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-03-10 23:44:09.675131 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-03-10 23:44:09.675177 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-03-10 23:44:09.675183 | orchestrator |
2025-03-10 23:44:09.675191 | orchestrator | TASK [Install local collections] ***********************************************
2025-03-10 23:44:09.675203 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-03-10 23:44:11.129928 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-03-10 23:44:11.129977 | orchestrator |
2025-03-10 23:44:11.129985 | orchestrator | PLAY [Create operator user] ****************************************************
2025-03-10 23:44:11.129993 | orchestrator |
2025-03-10 23:44:11.130000 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:44:11.130056 | orchestrator | ok: [testbed-manager]
2025-03-10 23:44:11.175225 | orchestrator |
2025-03-10 23:44:11.175274 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-03-10 23:44:11.175291 | orchestrator | ok: [testbed-manager]
2025-03-10 23:44:11.238545 | orchestrator |
2025-03-10 23:44:11.238598 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-03-10 23:44:11.238617 | orchestrator | ok: [testbed-manager]
2025-03-10 23:44:12.033267 | orchestrator |
2025-03-10 23:44:12.033942 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-03-10 23:44:12.033988 | orchestrator | changed: [testbed-manager]
2025-03-10 23:44:12.796251 | orchestrator |
2025-03-10 23:44:12.796321 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-03-10 23:44:12.796343 | orchestrator | changed: [testbed-manager]
2025-03-10 23:44:14.219569 | orchestrator |
2025-03-10 23:44:14.219667 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-03-10 23:44:14.219702 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-03-10 23:44:15.582472 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-03-10 23:44:15.582589 | orchestrator |
2025-03-10 23:44:15.582609 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file]
************************* 2025-03-10 23:44:15.582641 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:17.401946 | orchestrator | 2025-03-10 23:44:17.402620 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-10 23:44:17.402666 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:44:18.012193 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-03-10 23:44:18.012304 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:44:18.012323 | orchestrator | 2025-03-10 23:44:18.012339 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-10 23:44:18.012371 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:18.079242 | orchestrator | 2025-03-10 23:44:18.079284 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-10 23:44:18.079306 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:18.963847 | orchestrator | 2025-03-10 23:44:18.963951 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-03-10 23:44:18.963979 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:44:19.002759 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:19.002864 | orchestrator | 2025-03-10 23:44:19.002881 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-10 23:44:19.002910 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:19.040805 | orchestrator | 2025-03-10 23:44:19.040915 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-10 23:44:19.040946 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:19.073392 | orchestrator | 2025-03-10 23:44:19.073424 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-03-10 23:44:19.073446 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:19.121198 | orchestrator | 2025-03-10 23:44:19.121263 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-10 23:44:19.121290 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:19.830885 | orchestrator | 2025-03-10 23:44:19.831023 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-10 23:44:19.831067 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:21.276963 | orchestrator | 2025-03-10 23:44:21.277100 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-10 23:44:21.277123 | orchestrator | 2025-03-10 23:44:21.277141 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-10 23:44:21.277174 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:22.298906 | orchestrator | 2025-03-10 23:44:22.299056 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-03-10 23:44:22.299098 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:22.392693 | orchestrator | 2025-03-10 23:44:22.392839 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:44:22.392861 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-03-10 23:44:22.392876 | orchestrator | 2025-03-10 23:44:22.817916 | orchestrator | changed 2025-03-10 23:44:22.838888 | 2025-03-10 23:44:22.839016 | TASK [Point out that the log in on the manager is now possible] 2025-03-10 23:44:22.888784 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
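The PLAY RECAP line above can be checked mechanically before moving on to the next stage; a minimal sketch, where the recap text is copied verbatim from the log and the parsing itself is illustrative, not part of the testbed scripts:

```shell
# Extract the failed= and unreachable= counters from an Ansible PLAY RECAP
# line and classify the play. The sample line is the recap printed above
# for testbed-manager.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
  result="clean"
else
  result="broken"
fi
echo "$result"
```

The same check applies unchanged to the later recap (`ok=21 changed=11 ... failed=0`), since Ansible's recap format is stable across plays.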
2025-03-10 23:44:22.899102 | 2025-03-10 23:44:22.899260 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-10 23:44:22.943161 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-03-10 23:44:22.960073 | 2025-03-10 23:44:22.960271 | TASK [Run manager part 1 + 2] 2025-03-10 23:44:23.798396 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-10 23:44:23.856186 | orchestrator | 2025-03-10 23:44:26.411223 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-03-10 23:44:26.411286 | orchestrator | 2025-03-10 23:44:26.411301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-10 23:44:26.411319 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:26.444909 | orchestrator | 2025-03-10 23:44:26.444969 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-10 23:44:26.444992 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:26.482788 | orchestrator | 2025-03-10 23:44:26.482864 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-10 23:44:26.482884 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:26.520177 | orchestrator | 2025-03-10 23:44:26.520229 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-10 23:44:26.520247 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:26.577496 | orchestrator | 2025-03-10 23:44:26.577552 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-10 23:44:26.577570 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:26.633326 | orchestrator | 2025-03-10 23:44:26.633395 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-10 23:44:26.633421 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:26.675257 | orchestrator | 2025-03-10 23:44:26.675301 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-10 23:44:26.675314 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-03-10 23:44:27.404973 | orchestrator | 2025-03-10 23:44:27.405031 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-10 23:44:27.405051 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:27.446475 | orchestrator | 2025-03-10 23:44:27.446523 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-10 23:44:27.446540 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:28.881660 | orchestrator | 2025-03-10 23:44:28.881706 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-10 23:44:28.881723 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:29.419763 | orchestrator | 2025-03-10 23:44:29.419833 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-10 23:44:29.419852 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:30.631064 | orchestrator | 2025-03-10 23:44:30.631097 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-10 23:44:30.631109 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:44.250557 | orchestrator | 2025-03-10 23:44:44.250675 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-10 23:44:44.250712 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:44.976700 | orchestrator | 
2025-03-10 23:44:44.976826 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-10 23:44:44.976860 | orchestrator | ok: [testbed-manager] 2025-03-10 23:44:45.033741 | orchestrator | 2025-03-10 23:44:45.033839 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-10 23:44:45.033878 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:44:46.136516 | orchestrator | 2025-03-10 23:44:46.136614 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-03-10 23:44:46.136646 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:47.146468 | orchestrator | 2025-03-10 23:44:47.146568 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-03-10 23:44:47.146602 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:47.736625 | orchestrator | 2025-03-10 23:44:47.736729 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-03-10 23:44:47.736762 | orchestrator | changed: [testbed-manager] 2025-03-10 23:44:47.779131 | orchestrator | 2025-03-10 23:44:47.779205 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-03-10 23:44:47.779232 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-10 23:44:50.120414 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-10 23:44:50.120465 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-10 23:44:50.120474 | orchestrator | deprecation_warnings=False in ansible.cfg. 
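The deprecation warning above states its own remedy: setting `deprecation_warnings=False` in `ansible.cfg`. As an ini fragment, that is:

```
[defaults]
deprecation_warnings = False
```

This only silences the warning; the deprecated stdin behavior itself is still scheduled for removal in ansible-core 2.19.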
2025-03-10 23:44:50.120488 | orchestrator | changed: [testbed-manager] 2025-03-10 23:45:00.329198 | orchestrator | 2025-03-10 23:45:00.329341 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-03-10 23:45:00.329381 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-03-10 23:45:01.495354 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-03-10 23:45:01.495485 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-03-10 23:45:01.495504 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-03-10 23:45:01.495520 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-03-10 23:45:01.495535 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-03-10 23:45:01.495549 | orchestrator | 2025-03-10 23:45:01.495564 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-03-10 23:45:01.495616 | orchestrator | changed: [testbed-manager] 2025-03-10 23:45:01.536796 | orchestrator | 2025-03-10 23:45:01.536887 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-03-10 23:45:01.536918 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:45:04.513427 | orchestrator | 2025-03-10 23:45:04.513522 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-03-10 23:45:04.513550 | orchestrator | changed: [testbed-manager] 2025-03-10 23:45:04.552428 | orchestrator | 2025-03-10 23:45:04.552514 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-03-10 23:45:04.552542 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:46:50.958432 | orchestrator | 2025-03-10 23:46:50.958546 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-03-10 23:46:50.958581 | orchestrator | changed: [testbed-manager] 2025-03-10 
23:46:52.205991 | orchestrator | 2025-03-10 23:46:52.206121 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-10 23:46:52.206157 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:52.317696 | orchestrator | 2025-03-10 23:46:52.317752 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:46:52.317809 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-03-10 23:46:52.317827 | orchestrator | 2025-03-10 23:46:52.595301 | orchestrator | changed 2025-03-10 23:46:52.614384 | 2025-03-10 23:46:52.614531 | TASK [Reboot manager] 2025-03-10 23:46:54.176053 | orchestrator | changed 2025-03-10 23:46:54.193572 | 2025-03-10 23:46:54.193731 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-10 23:47:10.575762 | orchestrator | ok 2025-03-10 23:47:10.587313 | 2025-03-10 23:47:10.587435 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-10 23:48:10.641290 | orchestrator | ok 2025-03-10 23:48:10.652799 | 2025-03-10 23:48:10.652909 | TASK [Deploy manager + bootstrap nodes] 2025-03-10 23:48:13.310450 | orchestrator | 2025-03-10 23:48:13.315090 | orchestrator | # DEPLOY MANAGER 2025-03-10 23:48:13.315128 | orchestrator | 2025-03-10 23:48:13.315146 | orchestrator | + set -e 2025-03-10 23:48:13.315190 | orchestrator | + echo 2025-03-10 23:48:13.315208 | orchestrator | + echo '# DEPLOY MANAGER' 2025-03-10 23:48:13.315225 | orchestrator | + echo 2025-03-10 23:48:13.315249 | orchestrator | + cat /opt/manager-vars.sh 2025-03-10 23:48:13.315282 | orchestrator | export NUMBER_OF_NODES=6 2025-03-10 23:48:13.315415 | orchestrator | 2025-03-10 23:48:13.315435 | orchestrator | export CEPH_VERSION=quincy 2025-03-10 23:48:13.315450 | orchestrator | export CONFIGURATION_VERSION=main 2025-03-10 23:48:13.315464 | orchestrator | export MANAGER_VERSION=8.1.0 
2025-03-10 23:48:13.315478 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-03-10 23:48:13.315492 | orchestrator | 2025-03-10 23:48:13.315506 | orchestrator | export ARA=false 2025-03-10 23:48:13.315520 | orchestrator | export TEMPEST=false 2025-03-10 23:48:13.315534 | orchestrator | export IS_ZUUL=true 2025-03-10 23:48:13.315548 | orchestrator | 2025-03-10 23:48:13.315586 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-10 23:48:13.315602 | orchestrator | export EXTERNAL_API=false 2025-03-10 23:48:13.315616 | orchestrator | 2025-03-10 23:48:13.315630 | orchestrator | export IMAGE_USER=ubuntu 2025-03-10 23:48:13.315643 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-03-10 23:48:13.315658 | orchestrator | 2025-03-10 23:48:13.315672 | orchestrator | export CEPH_STACK=ceph-ansible 2025-03-10 23:48:13.315691 | orchestrator | 2025-03-10 23:48:13.316934 | orchestrator | + echo 2025-03-10 23:48:13.316954 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-10 23:48:13.316972 | orchestrator | ++ export INTERACTIVE=false 2025-03-10 23:48:13.316987 | orchestrator | ++ INTERACTIVE=false 2025-03-10 23:48:13.317001 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-10 23:48:13.317023 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-10 23:48:13.317040 | orchestrator | + source /opt/manager-vars.sh 2025-03-10 23:48:13.317258 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-10 23:48:13.317354 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-10 23:48:13.317383 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-10 23:48:13.372824 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-10 23:48:13.372847 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-10 23:48:13.372860 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-10 23:48:13.372882 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-10 23:48:13.372893 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-10 23:48:13.372904 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.1 2025-03-10 23:48:13.372914 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-10 23:48:13.372924 | orchestrator | ++ export ARA=false 2025-03-10 23:48:13.372934 | orchestrator | ++ ARA=false 2025-03-10 23:48:13.372944 | orchestrator | ++ export TEMPEST=false 2025-03-10 23:48:13.372954 | orchestrator | ++ TEMPEST=false 2025-03-10 23:48:13.372965 | orchestrator | ++ export IS_ZUUL=true 2025-03-10 23:48:13.372974 | orchestrator | ++ IS_ZUUL=true 2025-03-10 23:48:13.372985 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-10 23:48:13.372995 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-10 23:48:13.373011 | orchestrator | ++ export EXTERNAL_API=false 2025-03-10 23:48:13.373022 | orchestrator | ++ EXTERNAL_API=false 2025-03-10 23:48:13.373032 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-10 23:48:13.373042 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-10 23:48:13.373052 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-10 23:48:13.373062 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-10 23:48:13.373074 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-10 23:48:13.373085 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-10 23:48:13.373095 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-03-10 23:48:13.373119 | orchestrator | + docker version 2025-03-10 23:48:13.644082 | orchestrator | Client: Docker Engine - Community 2025-03-10 23:48:13.644453 | orchestrator | Version: 26.1.4 2025-03-10 23:48:13.644474 | orchestrator | API version: 1.45 2025-03-10 23:48:13.644485 | orchestrator | Go version: go1.21.11 2025-03-10 23:48:13.644495 | orchestrator | Git commit: 5650f9b 2025-03-10 23:48:13.644506 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-10 23:48:13.644517 | orchestrator | OS/Arch: linux/amd64 2025-03-10 23:48:13.644527 | orchestrator | Context: default 2025-03-10 23:48:13.644537 | orchestrator | 2025-03-10 
23:48:13.644547 | orchestrator | Server: Docker Engine - Community 2025-03-10 23:48:13.644558 | orchestrator | Engine: 2025-03-10 23:48:13.644600 | orchestrator | Version: 26.1.4 2025-03-10 23:48:13.644610 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-03-10 23:48:13.644621 | orchestrator | Go version: go1.21.11 2025-03-10 23:48:13.644633 | orchestrator | Git commit: de5c9cf 2025-03-10 23:48:13.644668 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-10 23:48:13.644679 | orchestrator | OS/Arch: linux/amd64 2025-03-10 23:48:13.644689 | orchestrator | Experimental: false 2025-03-10 23:48:13.644700 | orchestrator | containerd: 2025-03-10 23:48:13.644710 | orchestrator | Version: 1.7.25 2025-03-10 23:48:13.644720 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-03-10 23:48:13.644731 | orchestrator | runc: 2025-03-10 23:48:13.644741 | orchestrator | Version: 1.2.4 2025-03-10 23:48:13.644752 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-03-10 23:48:13.644762 | orchestrator | docker-init: 2025-03-10 23:48:13.644772 | orchestrator | Version: 0.19.0 2025-03-10 23:48:13.644788 | orchestrator | GitCommit: de40ad0 2025-03-10 23:48:13.648779 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-03-10 23:48:13.659242 | orchestrator | + set -e 2025-03-10 23:48:13.659489 | orchestrator | + source /opt/manager-vars.sh 2025-03-10 23:48:13.659516 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-10 23:48:13.659527 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-10 23:48:13.659537 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-10 23:48:13.659548 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-10 23:48:13.659558 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-10 23:48:13.659586 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-10 23:48:13.659597 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-10 23:48:13.659607 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-10 
23:48:13.659617 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-10 23:48:13.659627 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-10 23:48:13.659637 | orchestrator | ++ export ARA=false 2025-03-10 23:48:13.659647 | orchestrator | ++ ARA=false 2025-03-10 23:48:13.659657 | orchestrator | ++ export TEMPEST=false 2025-03-10 23:48:13.659667 | orchestrator | ++ TEMPEST=false 2025-03-10 23:48:13.659678 | orchestrator | ++ export IS_ZUUL=true 2025-03-10 23:48:13.659688 | orchestrator | ++ IS_ZUUL=true 2025-03-10 23:48:13.659698 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-10 23:48:13.659709 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-10 23:48:13.659724 | orchestrator | ++ export EXTERNAL_API=false 2025-03-10 23:48:13.659735 | orchestrator | ++ EXTERNAL_API=false 2025-03-10 23:48:13.659749 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-10 23:48:13.659759 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-10 23:48:13.659769 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-10 23:48:13.659779 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-10 23:48:13.659789 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-10 23:48:13.659800 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-10 23:48:13.659810 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-10 23:48:13.659820 | orchestrator | ++ export INTERACTIVE=false 2025-03-10 23:48:13.659830 | orchestrator | ++ INTERACTIVE=false 2025-03-10 23:48:13.659840 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-10 23:48:13.659850 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-10 23:48:13.659864 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-10 23:48:13.669885 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-03-10 23:48:13.669921 | orchestrator | + set -e 2025-03-10 23:48:13.678762 | orchestrator | + VERSION=8.1.0 2025-03-10 23:48:13.678786 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-03-10 23:48:13.678811 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-10 23:48:13.683663 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-10 23:48:13.683698 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-10 23:48:13.687319 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-03-10 23:48:13.693884 | orchestrator | /opt/configuration ~ 2025-03-10 23:48:13.696393 | orchestrator | + set -e 2025-03-10 23:48:13.696413 | orchestrator | + pushd /opt/configuration 2025-03-10 23:48:13.696428 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-10 23:48:13.696448 | orchestrator | + source /opt/venv/bin/activate 2025-03-10 23:48:13.697365 | orchestrator | ++ deactivate nondestructive 2025-03-10 23:48:13.697668 | orchestrator | ++ '[' -n '' ']' 2025-03-10 23:48:13.697686 | orchestrator | ++ '[' -n '' ']' 2025-03-10 23:48:13.697701 | orchestrator | ++ hash -r 2025-03-10 23:48:13.697715 | orchestrator | ++ '[' -n '' ']' 2025-03-10 23:48:13.697728 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-10 23:48:13.697743 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-10 23:48:13.697761 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-03-10 23:48:13.697793 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-10 23:48:13.697807 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-10 23:48:13.697821 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-10 23:48:13.697836 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-10 23:48:13.697850 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-10 23:48:13.697869 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-10 23:48:15.244234 | orchestrator | ++ export PATH 2025-03-10 23:48:15.244364 | orchestrator | ++ '[' -n '' ']' 2025-03-10 23:48:15.244383 | orchestrator | ++ '[' -z '' ']' 2025-03-10 23:48:15.244398 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-10 23:48:15.244413 | orchestrator | ++ PS1='(venv) ' 2025-03-10 23:48:15.244427 | orchestrator | ++ export PS1 2025-03-10 23:48:15.244441 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-10 23:48:15.244456 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-10 23:48:15.244470 | orchestrator | ++ hash -r 2025-03-10 23:48:15.244485 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-03-10 23:48:15.244520 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-03-10 23:48:15.246551 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-03-10 23:48:15.249496 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-03-10 23:48:15.252270 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-03-10 23:48:15.254881 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (24.2) 2025-03-10 23:48:15.275316 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-03-10 23:48:15.278585 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-03-10 23:48:15.280711 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-03-10 23:48:15.283257 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-03-10 23:48:15.358245 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-03-10 23:48:15.360331 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-03-10 23:48:15.362405 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-03-10 23:48:15.364240 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-03-10 23:48:15.369449 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-03-10 23:48:15.754891 | orchestrator | ++ which gilt 2025-03-10 23:48:15.758713 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-03-10 23:48:16.112491 | orchestrator | + /opt/venv/bin/gilt overlay 2025-03-10 23:48:16.112623 | orchestrator | osism.cfg-generics: 2025-03-10 23:48:17.773906 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-03-10 23:48:17.774108 | orchestrator | - copied (main) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-03-10 23:48:18.998420 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-03-10 23:48:18.998539 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-03-10 23:48:18.998603 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-03-10 23:48:18.998640 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-03-10 23:48:19.011289 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-03-10 23:48:19.390067 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-03-10 23:48:19.453980 | orchestrator | ~ 2025-03-10 23:48:19.456209 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-10 23:48:19.456232 | orchestrator | + deactivate 2025-03-10 23:48:19.456264 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-10 23:48:19.456279 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-10 23:48:19.456291 | orchestrator | + export PATH 2025-03-10 23:48:19.456304 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-10 23:48:19.456317 | orchestrator | + '[' -n '' ']' 2025-03-10 23:48:19.456329 | orchestrator | + hash -r 2025-03-10 23:48:19.456341 | orchestrator | + '[' -n '' ']' 2025-03-10 23:48:19.456354 | orchestrator | + unset VIRTUAL_ENV 2025-03-10 23:48:19.456366 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-10 23:48:19.456379 | orchestrator | + '[' '!' 
'' = nondestructive ']'
2025-03-10 23:48:19.456394 | orchestrator | + unset -f deactivate
2025-03-10 23:48:19.456407 | orchestrator | + popd
2025-03-10 23:48:19.456425 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-03-10 23:48:19.457089 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-03-10 23:48:19.457114 | orchestrator | ++ semver 8.1.0 7.0.0
2025-03-10 23:48:19.521183 | orchestrator | + [[ 1 -ge 0 ]]
2025-03-10 23:48:19.553919 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-03-10 23:48:19.553961 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-03-10 23:48:19.553984 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-03-10 23:48:19.554126 | orchestrator | + source /opt/venv/bin/activate
2025-03-10 23:48:19.554144 | orchestrator | ++ deactivate nondestructive
2025-03-10 23:48:19.554163 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:48:19.554246 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:48:19.554264 | orchestrator | ++ hash -r
2025-03-10 23:48:19.554282 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:48:19.554435 | orchestrator | ++ unset VIRTUAL_ENV
2025-03-10 23:48:19.554680 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-03-10 23:48:19.554705 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-03-10 23:48:19.554982 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-03-10 23:48:19.555000 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-03-10 23:48:19.555014 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-03-10 23:48:19.555028 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-03-10 23:48:19.555047 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-10 23:48:19.555150 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-10 23:48:19.555169 | orchestrator | ++ export PATH
2025-03-10 23:48:19.555183 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:48:19.555197 | orchestrator | ++ '[' -z '' ']'
2025-03-10 23:48:19.555215 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-03-10 23:48:19.555375 | orchestrator | ++ PS1='(venv) '
2025-03-10 23:48:19.555418 | orchestrator | ++ export PS1
2025-03-10 23:48:19.555433 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-03-10 23:48:19.555447 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-03-10 23:48:19.555463 | orchestrator | ++ hash -r
2025-03-10 23:48:19.555481 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-03-10 23:48:21.194321 | orchestrator |
2025-03-10 23:48:21.924501 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-03-10 23:48:21.924649 | orchestrator |
2025-03-10 23:48:21.924667 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-10 23:48:21.924695 | orchestrator | ok: [testbed-manager]
2025-03-10 23:48:23.056490 | orchestrator |
2025-03-10 23:48:23.056649 | orchestrator | TASK [Copy fact files] *********************************************************
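The `++ semver 8.1.0 7.0.0` / `+ [[ 1 -ge 0 ]]` trace above is a version gate: `enable_osism_kubernetes: true` is only written because the manager version compares greater than or equal to 7.0.0. A minimal sketch of such a comparison helper, assuming GNU `sort -V` semantics; `semver_cmp` is a hypothetical stand-in, not the testbed's actual `semver` script:

```shell
#!/bin/sh
# semver_cmp A B -> prints 1 if A > B, 0 if equal, -1 if A < B.
# Relies on `sort -V` (GNU coreutils version sort) to order the versions.
semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  # The lower version sorts first; if B sorts first, A is greater.
  if [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    echo 1
  else
    echo -1
  fi
}

manager_version=8.1.0
if [ "$(semver_cmp "$manager_version" 7.0.0)" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```

The same shape explains why `8.1.0` passes the guard while an older `6.x` manager would not.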
2025-03-10 23:48:23.056687 | orchestrator | changed: [testbed-manager]
2025-03-10 23:48:25.999265 | orchestrator |
2025-03-10 23:48:25.999404 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-03-10 23:48:25.999425 | orchestrator |
2025-03-10 23:48:25.999440 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:48:25.999472 | orchestrator | ok: [testbed-manager]
2025-03-10 23:48:33.847905 | orchestrator |
2025-03-10 23:48:33.848037 | orchestrator | TASK [Pull images] *************************************************************
2025-03-10 23:48:33.848099 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-03-10 23:49:41.015056 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-03-10 23:49:41.015174 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-03-10 23:49:41.015189 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-03-10 23:49:41.015201 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-03-10 23:49:41.015213 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-03-10 23:49:41.015224 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-03-10 23:49:41.015234 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-03-10 23:49:41.015245 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-03-10 23:49:41.015262 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-netbox:0.20241219.2)
2025-03-10 23:49:41.015273 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-03-10 23:49:41.015284 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-03-10 23:49:41.015294 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-03-10 23:49:41.015304 | orchestrator |
2025-03-10 23:49:41.015315 | orchestrator | TASK [Check status] ************************************************************
2025-03-10 23:49:41.015341 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-10 23:49:41.015352 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-03-10 23:49:41.015362 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-03-10 23:49:41.015372 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-03-10 23:49:41.015385 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j762523732426.1565', 'results_file': '/home/dragon/.ansible_async/j762523732426.1565', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015403 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j301737985380.1590', 'results_file': '/home/dragon/.ansible_async/j301737985380.1590', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015414 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j731851204361.1615', 'results_file': '/home/dragon/.ansible_async/j731851204361.1615', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015431 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j583714609041.1648', 'results_file': '/home/dragon/.ansible_async/j583714609041.1648', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015445 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-10 23:49:41.015456 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j364976880978.1680', 'results_file': '/home/dragon/.ansible_async/j364976880978.1680', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015467 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789341204575.1714', 'results_file': '/home/dragon/.ansible_async/j789341204575.1714', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-10 23:49:41.015575 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j323018749775.1746', 'results_file': '/home/dragon/.ansible_async/j323018749775.1746', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015586 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j422531732839.1780', 'results_file': '/home/dragon/.ansible_async/j422531732839.1780', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015597 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j605857260298.1812', 'results_file': '/home/dragon/.ansible_async/j605857260298.1812', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015613 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j573287829013.1845', 'results_file': '/home/dragon/.ansible_async/j573287829013.1845', 'changed': True, 'item': 'registry.osism.tech/osism/osism-netbox:0.20241219.2', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015624 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j458228381837.1877', 'results_file': '/home/dragon/.ansible_async/j458228381837.1877', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015636 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j732777013509.1909', 'results_file': '/home/dragon/.ansible_async/j732777013509.1909', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.015657 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j53731100406.1940', 'results_file': '/home/dragon/.ansible_async/j53731100406.1940', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-03-10 23:49:41.054648 | orchestrator |
2025-03-10 23:49:41.054724 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-03-10 23:49:41.054750 | orchestrator | ok: [testbed-manager]
2025-03-10 23:49:41.713886 | orchestrator |
2025-03-10 23:49:41.714008 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-03-10 23:49:41.714110 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:42.080240 | orchestrator |
2025-03-10 23:49:42.080344 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-03-10 23:49:42.080376 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:42.450784 | orchestrator |
2025-03-10 23:49:42.450886 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-03-10 23:49:42.450920 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:42.511669 | orchestrator |
2025-03-10 23:49:42.511724 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-03-10 23:49:42.511750 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:49:42.860795 | orchestrator |
2025-03-10 23:49:42.860892 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-03-10 23:49:42.860924 | orchestrator | ok: [testbed-manager]
2025-03-10 23:49:43.059344 | orchestrator |
2025-03-10 23:49:43.059414 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-03-10 23:49:43.059442 | orchestrator | skipping: [testbed-manager]
2025-03-10
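The `Pull images` / `Check status` pair above is the classic Ansible fire-and-poll pattern: each pull is started asynchronously (note the `ansible_job_id` and `results_file` fields), and a bounded retry loop then polls until every job reports finished. A rough POSIX shell sketch of the same pattern, under stated assumptions: `do_pull` is a hypothetical placeholder that just sleeps and touches a completion marker, where a real host would run `docker pull "$1"`:

```shell
#!/bin/sh
# Fire-and-poll sketch: start N background jobs, then poll their
# completion markers with a bounded retry budget (cf. "120 retries left").
workdir=$(mktemp -d)

do_pull() {            # placeholder for: docker pull "$1"
  sleep 0.2
  touch "$workdir/$(basename "$1").finished"   # analogous to results_file
}

images="ara-server:1.7.2 mariadb:11.6.2 ceph-ansible:8.1.0"
count=0
for image in $images; do
  do_pull "$image" &   # fire: run each pull asynchronously
  count=$((count + 1))
done

retries=120
while [ "$(ls "$workdir" | wc -l)" -lt "$count" ] && [ "$retries" -gt 0 ]; do
  echo "FAILED - RETRYING: Check status ($retries retries left)." >&2
  retries=$((retries - 1))
  sleep 0.1            # poll: wait briefly between status checks
done
[ "$(ls "$workdir" | wc -l)" -eq "$count" ] && echo "all pulls finished"
rm -rf "$workdir"
```

Starting all pulls before waiting is what lets thirteen images download in parallel rather than serially, which is why the whole task completes in roughly the time of the slowest pull.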
23:49:45.113863 | orchestrator |
2025-03-10 23:49:45.113973 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-03-10 23:49:45.113991 | orchestrator |
2025-03-10 23:49:45.114005 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:49:45.114086 | orchestrator | ok: [testbed-manager]
2025-03-10 23:49:45.349130 | orchestrator |
2025-03-10 23:49:45.349250 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-03-10 23:49:45.349285 | orchestrator |
2025-03-10 23:49:45.475106 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-03-10 23:49:45.475250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-03-10 23:49:46.680977 | orchestrator |
2025-03-10 23:49:46.681103 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-03-10 23:49:46.681141 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-03-10 23:49:48.670728 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-03-10 23:49:48.670864 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-03-10 23:49:48.670883 | orchestrator |
2025-03-10 23:49:48.670899 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-03-10 23:49:48.670930 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-03-10 23:49:49.381629 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-03-10 23:49:49.381742 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-03-10 23:49:49.381762 | orchestrator |
2025-03-10 23:49:49.381778 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-03-10 23:49:49.381808 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:49:50.164882 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:50.164986 | orchestrator |
2025-03-10 23:49:50.165005 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-03-10 23:49:50.165036 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:49:50.239324 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:50.239393 | orchestrator |
2025-03-10 23:49:50.239409 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-03-10 23:49:50.239436 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:49:50.667568 | orchestrator |
2025-03-10 23:49:50.667679 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-03-10 23:49:50.667715 | orchestrator | ok: [testbed-manager]
2025-03-10 23:49:50.785304 | orchestrator |
2025-03-10 23:49:50.785364 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-03-10 23:49:50.785390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-03-10 23:49:51.900844 | orchestrator |
2025-03-10 23:49:51.900961 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-03-10 23:49:51.900997 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:52.843935 | orchestrator |
2025-03-10 23:49:52.844044 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-03-10 23:49:52.844079 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:56.085015 | orchestrator |
2025-03-10 23:49:56.085135 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-03-10 23:49:56.085172 | orchestrator | changed: [testbed-manager]
2025-03-10 23:49:56.397295 | orchestrator |
2025-03-10 23:49:56.397415 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-03-10 23:49:56.397495 | orchestrator |
2025-03-10 23:49:56.573902 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-03-10 23:49:56.573994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-03-10 23:49:59.909567 | orchestrator |
2025-03-10 23:49:59.909685 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-03-10 23:49:59.909722 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:00.191505 | orchestrator |
2025-03-10 23:50:00.191620 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-03-10 23:50:00.191656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-03-10 23:50:01.431520 | orchestrator |
2025-03-10 23:50:01.431651 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-03-10 23:50:01.431688 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-03-10 23:50:01.546271 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-03-10 23:50:01.546370 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-03-10 23:50:01.546418 | orchestrator |
2025-03-10 23:50:01.546435 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-03-10 23:50:01.546506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-03-10 23:50:02.373717 | orchestrator
|
2025-03-10 23:50:02.373835 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-03-10 23:50:02.373885 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-03-10 23:50:03.318274 | orchestrator |
2025-03-10 23:50:03.318385 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-03-10 23:50:03.318418 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:50:03.779795 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:03.779917 | orchestrator |
2025-03-10 23:50:03.779936 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-03-10 23:50:03.779969 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:04.222917 | orchestrator |
2025-03-10 23:50:04.223023 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-03-10 23:50:04.223053 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:04.288935 | orchestrator |
2025-03-10 23:50:04.288972 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-03-10 23:50:04.288993 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:05.040747 | orchestrator |
2025-03-10 23:50:05.040850 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-03-10 23:50:05.040885 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:05.158298 | orchestrator |
2025-03-10 23:50:05.158387 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-03-10 23:50:05.158417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-03-10 23:50:06.031125 | orchestrator |
2025-03-10 23:50:06.031214 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-03-10 23:50:06.031247 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-03-10 23:50:06.784444 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-03-10 23:50:06.784587 | orchestrator |
2025-03-10 23:50:06.784606 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-03-10 23:50:06.784637 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-03-10 23:50:07.509040 | orchestrator |
2025-03-10 23:50:07.509132 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-03-10 23:50:07.509169 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:07.579798 | orchestrator |
2025-03-10 23:50:07.579854 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-03-10 23:50:07.579880 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:08.293688 | orchestrator |
2025-03-10 23:50:08.293796 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-03-10 23:50:08.293826 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:10.298613 | orchestrator |
2025-03-10 23:50:10.298733 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-03-10 23:50:10.298769 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:50:16.892973 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:50:16.893110 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:50:16.893130 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:16.893147 | orchestrator |
2025-03-10 23:50:16.893162 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-03-10 23:50:16.893194 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-03-10 23:50:17.641352 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-03-10 23:50:17.641509 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-03-10 23:50:17.641529 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-03-10 23:50:17.641544 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-03-10 23:50:17.641559 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-03-10 23:50:17.641599 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-03-10 23:50:17.641613 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-03-10 23:50:17.641627 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-03-10 23:50:17.641641 | orchestrator | changed: [testbed-manager] => (item=users)
2025-03-10 23:50:17.641655 | orchestrator |
2025-03-10 23:50:17.641670 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-03-10 23:50:17.641702 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-03-10 23:50:17.841964 | orchestrator |
2025-03-10 23:50:17.842074 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-03-10 23:50:17.842103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-03-10 23:50:18.633054 | orchestrator |
2025-03-10 23:50:18.633132 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-03-10 23:50:18.633160 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:19.386000 | orchestrator |
2025-03-10 23:50:19.386139 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-03-10 23:50:19.386171 |
orchestrator | ok: [testbed-manager]
2025-03-10 23:50:20.222742 | orchestrator |
2025-03-10 23:50:20.222846 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-03-10 23:50:20.222879 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:24.880080 | orchestrator |
2025-03-10 23:50:24.880213 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-03-10 23:50:24.880249 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:25.941645 | orchestrator |
2025-03-10 23:50:25.941756 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-03-10 23:50:25.941791 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:48.233367 | orchestrator |
2025-03-10 23:50:48.233545 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-03-10 23:50:48.233584 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-03-10 23:50:48.324443 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:48.324521 | orchestrator |
2025-03-10 23:50:48.324541 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-03-10 23:50:48.324568 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:48.376369 | orchestrator |
2025-03-10 23:50:48.376471 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-03-10 23:50:48.376489 | orchestrator |
2025-03-10 23:50:48.376507 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-03-10 23:50:48.376532 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:48.465772 | orchestrator |
2025-03-10 23:50:48.465838 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-03-10 23:50:48.465864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-03-10 23:50:49.419789 | orchestrator |
2025-03-10 23:50:49.419898 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-03-10 23:50:49.419929 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:49.527078 | orchestrator |
2025-03-10 23:50:49.527133 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-03-10 23:50:49.527158 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:49.606962 | orchestrator |
2025-03-10 23:50:49.606994 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-03-10 23:50:49.607015 | orchestrator | ok: [testbed-manager] => {
2025-03-10 23:50:50.423070 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-03-10 23:50:50.423167 | orchestrator | }
2025-03-10 23:50:50.423183 | orchestrator |
2025-03-10 23:50:50.423198 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-03-10 23:50:50.423228 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:51.518601 | orchestrator |
2025-03-10 23:50:51.518704 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-03-10 23:50:51.518765 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:51.607353 | orchestrator |
2025-03-10 23:50:51.607442 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-03-10 23:50:51.607468 | orchestrator | ok: [testbed-manager]
2025-03-10 23:50:51.680918 | orchestrator |
2025-03-10 23:50:51.680955 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-03-10 23:50:51.680990 | orchestrator | ok: [testbed-manager] => {
2025-03-10 23:50:51.766892 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-03-10 23:50:51.766945 | orchestrator | }
2025-03-10 23:50:51.766959 | orchestrator |
2025-03-10 23:50:51.766971 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-03-10 23:50:51.766993 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:51.843452 | orchestrator |
2025-03-10 23:50:51.843524 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-03-10 23:50:51.843548 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:51.909424 | orchestrator |
2025-03-10 23:50:51.909483 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-03-10 23:50:51.909512 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:51.976338 | orchestrator |
2025-03-10 23:50:51.976365 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-03-10 23:50:51.976383 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:52.046582 | orchestrator |
2025-03-10 23:50:52.046612 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-03-10 23:50:52.046631 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:52.259394 | orchestrator |
2025-03-10 23:50:52.259485 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-03-10 23:50:52.259509 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:50:53.807494 | orchestrator |
2025-03-10 23:50:53.807614 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-03-10 23:50:53.807656 | orchestrator | changed: [testbed-manager]
2025-03-10 23:50:53.937320 | orchestrator |
2025-03-10 23:50:53.937442 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-03-10 23:50:53.937477 | orchestrator | ok: [testbed-manager]
2025-03-10 23:51:54.015766 | orchestrator |
2025-03-10 23:51:54.015886 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-03-10 23:51:54.015923 | orchestrator | Pausing for 60 seconds
2025-03-10 23:51:54.120256 | orchestrator | changed: [testbed-manager]
2025-03-10 23:51:54.120318 | orchestrator |
2025-03-10 23:51:54.120334 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-03-10 23:51:54.120408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-03-10 23:57:11.664293 | orchestrator |
2025-03-10 23:57:11.664430 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-03-10 23:57:11.664471 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
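The restart handlers above compare the major version of the running postgres container (16) with that of the freshly pulled image (also 16), so the stop/upgrade/pgautoupgrade path is skipped. A minimal sketch of that decision, under the assumption that the major version can be parsed from an image tag like `postgres:16.6-alpine`; `major_of` is a hypothetical helper, while the role itself reads the versions from Docker container and image inspection:

```shell
#!/bin/sh
# major_of postgres:16.6-alpine -> 16
# Strips the "postgres:" prefix, then keeps everything before the first dot.
major_of() {
  echo "${1#postgres:}" | cut -d. -f1
}

running=postgres:16.6-alpine   # version of the running container
target=postgres:16.6-alpine    # version of the pulled image
if [ "$(major_of "$running")" = "$(major_of "$target")" ]; then
  echo "major versions match, skipping database upgrade"
else
  echo "major version changed, running pgautoupgrade path"
fi
```

Comparing only the major version is the relevant check because postgres data directories are compatible across minor releases but require a migration between major releases.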
2025-03-10 23:57:14.357940 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-03-10 23:57:14.358373 | orchestrator | [previous retry message repeated for 58 through 34 retries left]
2025-03-10 23:57:14.358590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (33 retries left). 2025-03-10 23:57:14.358606 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (32 retries left). 2025-03-10 23:57:14.358622 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (31 retries left). 2025-03-10 23:57:14.358638 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:14.358656 | orchestrator | 2025-03-10 23:57:14.358673 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-03-10 23:57:14.358689 | orchestrator | 2025-03-10 23:57:14.358706 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-10 23:57:14.358738 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:14.495132 | orchestrator | 2025-03-10 23:57:14.495173 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-03-10 23:57:14.495198 | orchestrator | 2025-03-10 23:57:14.565020 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-03-10 23:57:14.565071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-03-10 23:57:16.806261 | orchestrator | 2025-03-10 23:57:16.806384 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-03-10 23:57:16.806422 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:16.865908 | orchestrator | 2025-03-10 23:57:16.865997 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-03-10 23:57:16.866109 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:16.988407 | orchestrator | 2025-03-10 23:57:16.988502 | orchestrator | 
TASK [osism.services.manager : Include config tasks] *************************** 2025-03-10 23:57:16.988534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-03-10 23:57:20.096898 | orchestrator | 2025-03-10 23:57:20.097023 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-03-10 23:57:20.097062 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-03-10 23:57:20.810594 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-03-10 23:57:20.810707 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-03-10 23:57:20.810726 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-03-10 23:57:20.810740 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-03-10 23:57:20.810755 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-03-10 23:57:20.810770 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-03-10 23:57:20.810784 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-03-10 23:57:20.810797 | orchestrator | 2025-03-10 23:57:20.810812 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-03-10 23:57:20.810842 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:20.917717 | orchestrator | 2025-03-10 23:57:20.917803 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-03-10 23:57:20.917834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-03-10 23:57:22.283860 | orchestrator | 2025-03-10 23:57:22.283986 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-03-10 23:57:22.284025 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-03-10 23:57:22.985168 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-03-10 23:57:22.985285 | orchestrator | 2025-03-10 23:57:22.985303 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-03-10 23:57:22.985335 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:23.051514 | orchestrator | 2025-03-10 23:57:23.051553 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-03-10 23:57:23.051576 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:57:23.117276 | orchestrator | 2025-03-10 23:57:23.117331 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-03-10 23:57:23.117356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-03-10 23:57:24.679866 | orchestrator | 2025-03-10 23:57:24.679979 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-03-10 23:57:24.680013 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:57:25.371942 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:57:25.372052 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:25.372070 | orchestrator | 2025-03-10 23:57:25.372130 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-03-10 23:57:25.372161 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:25.488430 | orchestrator | 2025-03-10 23:57:25.488509 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-03-10 23:57:25.488541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-03-10 23:57:26.167353 | 
orchestrator | 2025-03-10 23:57:26.167468 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-03-10 23:57:26.167504 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:57:26.831206 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:26.831314 | orchestrator | 2025-03-10 23:57:26.831333 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-03-10 23:57:26.831367 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:26.953655 | orchestrator | 2025-03-10 23:57:26.953741 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-03-10 23:57:26.953771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-03-10 23:57:27.622787 | orchestrator | 2025-03-10 23:57:27.622957 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-03-10 23:57:27.623031 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:28.064511 | orchestrator | 2025-03-10 23:57:28.064618 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-03-10 23:57:28.064652 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:29.444276 | orchestrator | 2025-03-10 23:57:29.444388 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-03-10 23:57:29.444422 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-03-10 23:57:30.156844 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-03-10 23:57:30.156957 | orchestrator | 2025-03-10 23:57:30.156977 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-03-10 23:57:30.157006 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:30.584327 | orchestrator | 
2025-03-10 23:57:30.584451 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-03-10 23:57:30.584488 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:30.622490 | orchestrator | 2025-03-10 23:57:30.622567 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-03-10 23:57:30.622595 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:57:31.349336 | orchestrator | 2025-03-10 23:57:31.349449 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-03-10 23:57:31.349484 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:31.445743 | orchestrator | 2025-03-10 23:57:31.445843 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-03-10 23:57:31.445885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-03-10 23:57:31.495216 | orchestrator | 2025-03-10 23:57:31.495292 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-03-10 23:57:31.495323 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:33.700619 | orchestrator | 2025-03-10 23:57:33.700754 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-03-10 23:57:33.700793 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-03-10 23:57:34.495864 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-03-10 23:57:34.495972 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-03-10 23:57:34.495989 | orchestrator | 2025-03-10 23:57:34.496004 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-03-10 23:57:34.496036 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:34.611508 | orchestrator | 
2025-03-10 23:57:34.611569 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-03-10 23:57:34.611595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-03-10 23:57:34.672037 | orchestrator | 2025-03-10 23:57:34.672092 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-03-10 23:57:34.672115 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:35.417372 | orchestrator | 2025-03-10 23:57:35.417481 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-10 23:57:35.417506 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-10 23:57:35.510523 | orchestrator | 2025-03-10 23:57:35.510564 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-10 23:57:35.510588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-10 23:57:36.262525 | orchestrator | 2025-03-10 23:57:36.262637 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-10 23:57:36.262671 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:36.952569 | orchestrator | 2025-03-10 23:57:36.952721 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-10 23:57:36.952774 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:37.013094 | orchestrator | 2025-03-10 23:57:37.013164 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-10 23:57:37.013193 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:57:37.075222 | orchestrator | 2025-03-10 23:57:37.075334 | orchestrator | TASK [osism.services.manager : Set mariadb 
healthcheck for mariadb >= 11.0.0] *** 2025-03-10 23:57:37.075365 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:37.967221 | orchestrator | 2025-03-10 23:57:37.967337 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-10 23:57:37.967372 | orchestrator | changed: [testbed-manager] 2025-03-10 23:58:01.757165 | orchestrator | 2025-03-10 23:58:01.757340 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-10 23:58:01.757384 | orchestrator | changed: [testbed-manager] 2025-03-10 23:58:02.463641 | orchestrator | 2025-03-10 23:58:02.463791 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-10 23:58:02.463831 | orchestrator | ok: [testbed-manager] 2025-03-10 23:58:05.183850 | orchestrator | 2025-03-10 23:58:05.184010 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-10 23:58:05.184089 | orchestrator | changed: [testbed-manager] 2025-03-10 23:58:05.272156 | orchestrator | 2025-03-10 23:58:05.272208 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-10 23:58:05.272237 | orchestrator | ok: [testbed-manager] 2025-03-10 23:58:05.353885 | orchestrator | 2025-03-10 23:58:05.353916 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-10 23:58:05.353931 | orchestrator | 2025-03-10 23:58:05.353946 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-10 23:58:05.353967 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:59:05.424271 | orchestrator | 2025-03-10 23:59:05.424423 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-10 23:59:05.424465 | orchestrator | Pausing for 60 seconds 2025-03-10 23:59:13.198277 | orchestrator | 
changed: [testbed-manager] 2025-03-10 23:59:13.198410 | orchestrator | 2025-03-10 23:59:13.198430 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-10 23:59:13.198464 | orchestrator | changed: [testbed-manager] 2025-03-10 23:59:55.241077 | orchestrator | 2025-03-10 23:59:55.241236 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-03-10 23:59:55.241294 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-03-11 00:00:02.964678 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-11 00:00:02.964812 | orchestrator | changed: [testbed-manager] 2025-03-11 00:00:02.964832 | orchestrator | 2025-03-11 00:00:02.964848 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-11 00:00:02.964879 | orchestrator | changed: [testbed-manager] 2025-03-11 00:00:03.075809 | orchestrator | 2025-03-11 00:00:03.075896 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-11 00:00:03.075941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-11 00:00:03.134924 | orchestrator | 2025-03-11 00:00:03.135017 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-11 00:00:03.135034 | orchestrator | 2025-03-11 00:00:03.135049 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-11 00:00:03.135074 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:00:03.336637 | orchestrator | 2025-03-11 00:00:03.336737 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:00:03.336756 | orchestrator | 
testbed-manager : ok=103 changed=55 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-11 00:00:03.336770 | orchestrator | 2025-03-11 00:00:03.336800 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-11 00:00:03.342743 | orchestrator | + deactivate 2025-03-11 00:00:03.342781 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-11 00:00:03.342797 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-11 00:00:03.342812 | orchestrator | + export PATH 2025-03-11 00:00:03.342827 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-11 00:00:03.342842 | orchestrator | + '[' -n '' ']' 2025-03-11 00:00:03.342856 | orchestrator | + hash -r 2025-03-11 00:00:03.342870 | orchestrator | + '[' -n '' ']' 2025-03-11 00:00:03.342914 | orchestrator | + unset VIRTUAL_ENV 2025-03-11 00:00:03.342929 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-11 00:00:03.342943 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-11 00:00:03.342957 | orchestrator | + unset -f deactivate 2025-03-11 00:00:03.343018 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-11 00:00:03.343045 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-11 00:00:03.343614 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-11 00:00:03.343646 | orchestrator | + local max_attempts=60 2025-03-11 00:00:03.343663 | orchestrator | + local name=ceph-ansible 2025-03-11 00:00:03.343681 | orchestrator | + local attempt_num=1 2025-03-11 00:00:03.343706 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-11 00:00:03.370244 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:00:03.371177 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-11 00:00:03.371211 | orchestrator | + local max_attempts=60 2025-03-11 00:00:03.371226 | orchestrator | + local name=kolla-ansible 2025-03-11 00:00:03.371252 | orchestrator | + local attempt_num=1 2025-03-11 00:00:03.371272 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-11 00:00:03.399196 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:00:03.399820 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-11 00:00:03.399854 | orchestrator | + local max_attempts=60 2025-03-11 00:00:03.399869 | orchestrator | + local name=osism-ansible 2025-03-11 00:00:03.399885 | orchestrator | + local attempt_num=1 2025-03-11 00:00:03.399908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-11 00:00:03.426531 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:00:04.820956 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-11 00:00:04.821119 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-11 00:00:04.821159 | orchestrator | ++ semver 8.1.0 8.0.0 
2025-03-11 00:00:04.870265 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-11 00:00:04.870959 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-03-11 00:00:04.871019 | orchestrator | + local max_attempts=60 2025-03-11 00:00:04.871037 | orchestrator | + local name=netbox-netbox-1 2025-03-11 00:00:04.871051 | orchestrator | + local attempt_num=1 2025-03-11 00:00:04.871071 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-03-11 00:00:04.902097 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:00:04.909595 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-03-11 00:00:04.909657 | orchestrator | + set -e 2025-03-11 00:00:07.007055 | orchestrator | + osism netbox import 2025-03-11 00:00:07.007189 | orchestrator | 2025-03-11 00:00:07 | INFO  | Task 61821ff1-a34c-4322-b67e-9a2d155f028c is running. Wait. No more output. 2025-03-11 00:00:11.107209 | orchestrator | + osism netbox init 2025-03-11 00:00:12.897676 | orchestrator | 2025-03-11 00:00:12 | INFO  | Task 2897c9f1-6f0c-4954-b4fd-39d980b1f587 was prepared for execution. 2025-03-11 00:00:15.088244 | orchestrator | 2025-03-11 00:00:12 | INFO  | It takes a moment until task 2897c9f1-6f0c-4954-b4fd-39d980b1f587 has been started and output is visible here. 2025-03-11 00:00:15.088384 | orchestrator | 2025-03-11 00:00:15.088641 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-03-11 00:00:15.089544 | orchestrator | 2025-03-11 00:00:15.090108 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-03-11 00:00:16.239630 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-03-11 00:00:16.240083 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-03-11 00:00:16.240195 | orchestrator | Python interpreter could change the meaning of that path. 
See 2025-03-11 00:00:16.240592 | orchestrator | https://docs.ansible.com/ansible- 2025-03-11 00:00:16.240626 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-03-11 00:00:16.247414 | orchestrator | ok: [localhost] 2025-03-11 00:00:16.249424 | orchestrator | 2025-03-11 00:00:16.250295 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-03-11 00:00:16.250776 | orchestrator | 2025-03-11 00:00:16.251123 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-03-11 00:00:17.957582 | orchestrator | changed: [localhost] 2025-03-11 00:00:17.958328 | orchestrator | 2025-03-11 00:00:17.958372 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-03-11 00:00:19.804528 | orchestrator | changed: [localhost] 2025-03-11 00:00:19.805291 | orchestrator | 2025-03-11 00:00:19.805327 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-03-11 00:00:19.805610 | orchestrator | 2025-03-11 00:00:19.805921 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-03-11 00:00:27.821184 | orchestrator | changed: [localhost] 2025-03-11 00:00:27.821394 | orchestrator | 2025-03-11 00:00:27.822426 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-03-11 00:00:29.213094 | orchestrator | changed: [localhost] 2025-03-11 00:00:29.214106 | orchestrator | 2025-03-11 00:00:29.215036 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-03-11 00:00:29.215813 | orchestrator | 2025-03-11 00:00:29.216665 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-03-11 00:00:30.745948 | orchestrator | changed: [localhost] 2025-03-11 00:00:32.097179 | orchestrator | 2025-03-11 
00:00:32.097284 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-03-11 00:00:32.097318 | orchestrator | changed: [localhost] 2025-03-11 00:00:32.097562 | orchestrator | 2025-03-11 00:00:32.097597 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:00:32.097712 | orchestrator | 2025-03-11 00:00:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:00:32.097858 | orchestrator | 2025-03-11 00:00:32 | INFO  | Please wait and do not abort execution. 2025-03-11 00:00:32.097886 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:00:32.098434 | orchestrator | 2025-03-11 00:00:32.479510 | orchestrator | + osism netbox manage 1000 2025-03-11 00:00:34.060308 | orchestrator | 2025-03-11 00:00:34 | INFO  | Task 08637869-b8da-4b3b-9538-322280c324a2 was prepared for execution. 2025-03-11 00:00:36.260692 | orchestrator | 2025-03-11 00:00:34 | INFO  | It takes a moment until task 08637869-b8da-4b3b-9538-322280c324a2 has been started and output is visible here. 
2025-03-11 00:00:36.260829 | orchestrator | 2025-03-11 00:00:36.261826 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-03-11 00:00:36.262137 | orchestrator | 2025-03-11 00:00:36.262171 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-03-11 00:00:38.087393 | orchestrator | changed: [localhost] 2025-03-11 00:00:45.329486 | orchestrator | 2025-03-11 00:00:45.329582 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-03-11 00:00:45.329610 | orchestrator | changed: [localhost] 2025-03-11 00:00:45.330381 | orchestrator | 2025-03-11 00:00:45.330857 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-03-11 00:00:52.511952 | orchestrator | changed: [localhost] 2025-03-11 00:00:52.512883 | orchestrator | 2025-03-11 00:00:52.512923 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-03-11 00:00:59.535830 | orchestrator | changed: [localhost] 2025-03-11 00:01:02.077144 | orchestrator | 2025-03-11 00:01:02.077263 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-03-11 00:01:02.077299 | orchestrator | changed: [localhost] 2025-03-11 00:01:04.958232 | orchestrator | 2025-03-11 00:01:04.958332 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-03-11 00:01:04.958356 | orchestrator | changed: [localhost] 2025-03-11 00:01:04.958873 | orchestrator | 2025-03-11 00:01:08.115717 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-03-11 00:01:08.115850 | orchestrator | changed: [localhost] 2025-03-11 00:01:11.344190 | orchestrator | 2025-03-11 00:01:11.344323 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-03-11 
00:01:11.344363 | orchestrator | changed: [localhost] 2025-03-11 00:01:11.344831 | orchestrator | 2025-03-11 00:01:14.087377 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-03-11 00:01:14.087494 | orchestrator | changed: [localhost] 2025-03-11 00:01:16.391901 | orchestrator | 2025-03-11 00:01:16.392686 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-03-11 00:01:16.392745 | orchestrator | changed: [localhost] 2025-03-11 00:01:16.393404 | orchestrator | 2025-03-11 00:01:16.393529 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-03-11 00:01:18.848050 | orchestrator | changed: [localhost] 2025-03-11 00:01:21.209532 | orchestrator | 2025-03-11 00:01:21.209649 | orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-03-11 00:01:21.209685 | orchestrator | changed: [localhost] 2025-03-11 00:01:21.210312 | orchestrator | 2025-03-11 00:01:21.210349 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-03-11 00:01:23.991849 | orchestrator | changed: [localhost] 2025-03-11 00:01:26.574370 | orchestrator | 2025-03-11 00:01:26.574484 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-03-11 00:01:26.574624 | orchestrator | changed: [localhost] 2025-03-11 00:01:26.575380 | orchestrator | 2025-03-11 00:01:26.575412 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-03-11 00:01:29.569873 | orchestrator | changed: [localhost] 2025-03-11 00:01:29.571794 | orchestrator | 2025-03-11 00:01:29.571858 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:01:29.574057 | orchestrator | 2025-03-11 00:01:29 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-03-11 00:01:29.574090 | orchestrator | 2025-03-11 00:01:29 | INFO  | Please wait and do not abort execution. 2025-03-11 00:01:29.574114 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:01:29.574583 | orchestrator | 2025-03-11 00:01:29.962299 | orchestrator | + osism netbox connect 1000 --state a 2025-03-11 00:01:31.728847 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task fc0bf259-f43a-4169-9551-6037db8fb5f3 for device testbed-node-7 is running in background 2025-03-11 00:01:31.733682 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 2d3483a2-3120-4600-b8f6-525b017d8d40 for device testbed-node-8 is running in background 2025-03-11 00:01:31.741258 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 26f795ee-ffdb-4c5a-ba70-28c9825eeae0 for device testbed-switch-1 is running in background 2025-03-11 00:01:31.744838 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 24a2918f-b607-41ce-b24c-782ee7229f2a for device testbed-node-9 is running in background 2025-03-11 00:01:31.751187 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task d92bd55e-8623-41b0-a7b3-51427089cf77 for device testbed-node-3 is running in background 2025-03-11 00:01:31.757194 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 95b97734-a123-4e72-9831-8c434ab766a8 for device testbed-node-2 is running in background 2025-03-11 00:01:31.759818 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 65b85a7e-2a99-4f4a-b6ed-a909cf527c2e for device testbed-node-5 is running in background 2025-03-11 00:01:31.764832 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 0408b4ac-1a93-4268-ae88-9c4ddae73cb8 for device testbed-node-4 is running in background 2025-03-11 00:01:31.770241 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task be48828d-a845-4c39-b215-11e0441204ad for device testbed-manager is running in background 2025-03-11 00:01:31.775163 | orchestrator | 2025-03-11 00:01:31 | 
INFO  | Task 47d058ed-8ec2-450f-aa33-dbd243accb8a for device testbed-switch-0 is running in background 2025-03-11 00:01:31.779460 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 6f77e4a0-4044-49be-b1b2-73f4bd30da13 for device testbed-switch-2 is running in background 2025-03-11 00:01:31.783248 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 0c1ac114-1783-48a6-bbfe-79ca382b5f0c for device testbed-node-6 is running in background 2025-03-11 00:01:31.787781 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 472c5c41-7ac8-42ab-8789-7c469fa6eaad for device testbed-node-0 is running in background 2025-03-11 00:01:31.791342 | orchestrator | 2025-03-11 00:01:31 | INFO  | Task 9d074167-756e-4dba-9a33-72328b4cfd72 for device testbed-node-1 is running in background 2025-03-11 00:01:31.791483 | orchestrator | 2025-03-11 00:01:31 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-03-11 00:01:32.075099 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-03-11 00:01:34.143373 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-03-11 00:01:36.240655 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-03-11 00:01:38.483510 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-03-11 00:01:38.997906 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-11 00:01:39.009380 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-11 00:01:39.009420 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-11 00:01:39.009436 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-03-11 00:01:39.009452 | orchestrator | 
manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 3 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-03-11 00:01:39.009466 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009480 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009495 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009510 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2025-03-11 00:01:39.009524 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009538 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 3 minutes ago Up 3 minutes (healthy) 3306/tcp 2025-03-11 00:01:39.009552 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism-netbox:0.20241219.2 "/usr/bin/tini -- os…" netbox 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009566 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009580 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 3 minutes ago Up 3 minutes (healthy) 6379/tcp 2025-03-11 00:01:39.009605 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009620 | orchestrator | 
osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-11 00:01:39.009660 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2025-03-11 00:01:39.009674 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient 3 minutes ago Up 3 minutes (healthy) 2025-03-11 00:01:39.009697 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-03-11 00:01:39.296285 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-11 00:01:39.304649 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 11 minutes ago Up 10 minutes (healthy) 2025-03-11 00:01:39.304685 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 11 minutes ago Up 5 minutes (healthy) 2025-03-11 00:01:39.304700 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 11 minutes ago Up 10 minutes (healthy) 5432/tcp 2025-03-11 00:01:39.304716 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 11 minutes ago Up 10 minutes (healthy) 6379/tcp 2025-03-11 00:01:39.304737 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-11 00:01:39.365162 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-11 00:01:39.368263 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-03-11 00:01:39.368301 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-03-11 00:01:41.591433 | orchestrator | 2025-03-11 00:01:41 | INFO  | Task a6d9a398-3ed5-4fe8-8a00-801a2b615d52 (resolvconf) was prepared for execution. 
2025-03-11 00:01:45.287378 | orchestrator | 2025-03-11 00:01:41 | INFO  | It takes a moment until task a6d9a398-3ed5-4fe8-8a00-801a2b615d52 (resolvconf) has been started and output is visible here. 2025-03-11 00:01:45.287520 | orchestrator | 2025-03-11 00:01:45.288127 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-03-11 00:01:45.288534 | orchestrator | 2025-03-11 00:01:45.288786 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-11 00:01:45.289284 | orchestrator | Tuesday 11 March 2025 00:01:45 +0000 (0:00:00.098) 0:00:00.098 ********* 2025-03-11 00:01:50.759835 | orchestrator | ok: [testbed-manager] 2025-03-11 00:01:50.760169 | orchestrator | 2025-03-11 00:01:50.760202 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-11 00:01:50.761248 | orchestrator | Tuesday 11 March 2025 00:01:50 +0000 (0:00:05.468) 0:00:05.567 ********* 2025-03-11 00:01:50.833020 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:01:50.833427 | orchestrator | 2025-03-11 00:01:50.833678 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-11 00:01:50.836042 | orchestrator | Tuesday 11 March 2025 00:01:50 +0000 (0:00:00.077) 0:00:05.644 ********* 2025-03-11 00:01:50.946477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-03-11 00:01:50.946604 | orchestrator | 2025-03-11 00:01:50.946626 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-11 00:01:50.947008 | orchestrator | Tuesday 11 March 2025 00:01:50 +0000 (0:00:00.116) 0:00:05.760 ********* 2025-03-11 00:01:51.069663 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-03-11 00:01:51.070117 | orchestrator | 2025-03-11 00:01:51.070156 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-11 00:01:51.070178 | orchestrator | Tuesday 11 March 2025 00:01:51 +0000 (0:00:00.122) 0:00:05.883 ********* 2025-03-11 00:01:52.416117 | orchestrator | ok: [testbed-manager] 2025-03-11 00:01:52.417371 | orchestrator | 2025-03-11 00:01:52.417418 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-11 00:01:52.473885 | orchestrator | Tuesday 11 March 2025 00:01:52 +0000 (0:00:01.341) 0:00:07.224 ********* 2025-03-11 00:01:52.473921 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:01:52.474720 | orchestrator | 2025-03-11 00:01:52.475487 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-11 00:01:52.478101 | orchestrator | Tuesday 11 March 2025 00:01:52 +0000 (0:00:00.062) 0:00:07.287 ********* 2025-03-11 00:01:53.056318 | orchestrator | ok: [testbed-manager] 2025-03-11 00:01:53.139224 | orchestrator | 2025-03-11 00:01:53.139360 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-11 00:01:53.139379 | orchestrator | Tuesday 11 March 2025 00:01:53 +0000 (0:00:00.577) 0:00:07.865 ********* 2025-03-11 00:01:53.139401 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:01:53.140085 | orchestrator | 2025-03-11 00:01:53.140122 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-11 00:01:53.764840 | orchestrator | Tuesday 11 March 2025 00:01:53 +0000 (0:00:00.086) 0:00:07.952 ********* 2025-03-11 00:01:53.765016 | orchestrator | changed: [testbed-manager] 2025-03-11 00:01:53.767072 | orchestrator | 2025-03-11 
00:01:53.767110 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-11 00:01:53.769550 | orchestrator | Tuesday 11 March 2025 00:01:53 +0000 (0:00:00.625) 0:00:08.577 ********* 2025-03-11 00:01:55.090675 | orchestrator | changed: [testbed-manager] 2025-03-11 00:01:55.091012 | orchestrator | 2025-03-11 00:01:55.091049 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-11 00:01:55.091072 | orchestrator | Tuesday 11 March 2025 00:01:55 +0000 (0:00:01.324) 0:00:09.902 ********* 2025-03-11 00:01:56.268521 | orchestrator | ok: [testbed-manager] 2025-03-11 00:01:56.269389 | orchestrator | 2025-03-11 00:01:56.269656 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-11 00:01:56.269940 | orchestrator | Tuesday 11 March 2025 00:01:56 +0000 (0:00:01.178) 0:00:11.080 ********* 2025-03-11 00:01:56.355447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-03-11 00:01:56.355579 | orchestrator | 2025-03-11 00:01:56.356178 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-11 00:01:56.359099 | orchestrator | Tuesday 11 March 2025 00:01:56 +0000 (0:00:00.085) 0:00:11.166 ********* 2025-03-11 00:01:57.910787 | orchestrator | changed: [testbed-manager] 2025-03-11 00:01:57.911345 | orchestrator | 2025-03-11 00:01:57.911520 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:01:57.911553 | orchestrator | 2025-03-11 00:01:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:01:57.912231 | orchestrator | 2025-03-11 00:01:57 | INFO  | Please wait and do not abort execution. 
2025-03-11 00:01:57.912263 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-11 00:01:57.912466 | orchestrator | 2025-03-11 00:01:57.913222 | orchestrator | Tuesday 11 March 2025 00:01:57 +0000 (0:00:01.558) 0:00:12.724 ********* 2025-03-11 00:01:57.914384 | orchestrator | =============================================================================== 2025-03-11 00:01:57.914417 | orchestrator | Gathering Facts --------------------------------------------------------- 5.47s 2025-03-11 00:01:57.915256 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.56s 2025-03-11 00:01:57.916290 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.34s 2025-03-11 00:01:57.916315 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.32s 2025-03-11 00:01:57.916334 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.18s 2025-03-11 00:01:57.916614 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2025-03-11 00:01:57.918149 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.58s 2025-03-11 00:01:57.919448 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.12s 2025-03-11 00:01:57.919475 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.12s 2025-03-11 00:01:57.919489 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-03-11 00:01:57.919504 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-03-11 00:01:57.919518 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2025-03-11 00:01:57.919536 | orchestrator | 
osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-03-11 00:01:58.513444 | orchestrator | + osism apply sshconfig 2025-03-11 00:02:00.502559 | orchestrator | 2025-03-11 00:02:00 | INFO  | Task 3b0c5d03-0338-4504-b693-9669c57204fc (sshconfig) was prepared for execution. 2025-03-11 00:02:04.226735 | orchestrator | 2025-03-11 00:02:00 | INFO  | It takes a moment until task 3b0c5d03-0338-4504-b693-9669c57204fc (sshconfig) has been started and output is visible here. 2025-03-11 00:02:04.226833 | orchestrator | 2025-03-11 00:02:04.227340 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-03-11 00:02:04.227377 | orchestrator | 2025-03-11 00:02:04.228107 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-03-11 00:02:04.228257 | orchestrator | Tuesday 11 March 2025 00:02:04 +0000 (0:00:00.151) 0:00:00.151 ********* 2025-03-11 00:02:04.948444 | orchestrator | ok: [testbed-manager] 2025-03-11 00:02:05.596034 | orchestrator | 2025-03-11 00:02:05.596157 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-03-11 00:02:05.596177 | orchestrator | Tuesday 11 March 2025 00:02:04 +0000 (0:00:00.725) 0:00:00.876 ********* 2025-03-11 00:02:05.596210 | orchestrator | changed: [testbed-manager] 2025-03-11 00:02:05.597368 | orchestrator | 2025-03-11 00:02:05.597708 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-03-11 00:02:05.597881 | orchestrator | Tuesday 11 March 2025 00:02:05 +0000 (0:00:00.651) 0:00:01.528 ********* 2025-03-11 00:02:12.525453 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-03-11 00:02:12.526775 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-11 00:02:12.526833 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-03-11 00:02:12.527227 | orchestrator | 
changed: [testbed-manager] => (item=testbed-node-2) 2025-03-11 00:02:12.530447 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-03-11 00:02:12.535242 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-03-11 00:02:12.536008 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-03-11 00:02:12.536036 | orchestrator | 2025-03-11 00:02:12.536054 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-03-11 00:02:12.536077 | orchestrator | Tuesday 11 March 2025 00:02:12 +0000 (0:00:06.926) 0:00:08.455 ********* 2025-03-11 00:02:12.616992 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:02:12.617239 | orchestrator | 2025-03-11 00:02:12.620183 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-03-11 00:02:12.626475 | orchestrator | Tuesday 11 March 2025 00:02:12 +0000 (0:00:00.092) 0:00:08.547 ********* 2025-03-11 00:02:13.560011 | orchestrator | changed: [testbed-manager] 2025-03-11 00:02:13.560216 | orchestrator | 2025-03-11 00:02:13.560236 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:02:13.560251 | orchestrator | 2025-03-11 00:02:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:02:13.560270 | orchestrator | 2025-03-11 00:02:13 | INFO  | Please wait and do not abort execution. 
2025-03-11 00:02:13.562265 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:02:13.562567 | orchestrator | 2025-03-11 00:02:13.563057 | orchestrator | Tuesday 11 March 2025 00:02:13 +0000 (0:00:00.947) 0:00:09.494 ********* 2025-03-11 00:02:13.563347 | orchestrator | =============================================================================== 2025-03-11 00:02:13.563372 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.93s 2025-03-11 00:02:13.563792 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.95s 2025-03-11 00:02:13.564078 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.73s 2025-03-11 00:02:13.564415 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.65s 2025-03-11 00:02:13.564816 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2025-03-11 00:02:14.257865 | orchestrator | + osism apply known-hosts 2025-03-11 00:02:16.185026 | orchestrator | 2025-03-11 00:02:16 | INFO  | Task 07f88b3d-3e10-4cb7-b33e-f8e50d4af1e3 (known-hosts) was prepared for execution. 2025-03-11 00:02:19.836950 | orchestrator | 2025-03-11 00:02:16 | INFO  | It takes a moment until task 07f88b3d-3e10-4cb7-b33e-f8e50d4af1e3 (known-hosts) has been started and output is visible here. 
2025-03-11 00:02:19.837138 | orchestrator | 2025-03-11 00:02:19.837829 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-03-11 00:02:19.841639 | orchestrator | 2025-03-11 00:02:19.842082 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-03-11 00:02:19.842130 | orchestrator | Tuesday 11 March 2025 00:02:19 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-03-11 00:02:26.430148 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-11 00:02:26.431158 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-11 00:02:26.431203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-11 00:02:26.433221 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-11 00:02:26.433249 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-11 00:02:26.434116 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-11 00:02:26.435085 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-11 00:02:26.435500 | orchestrator | 2025-03-11 00:02:26.436130 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-03-11 00:02:26.436755 | orchestrator | Tuesday 11 March 2025 00:02:26 +0000 (0:00:06.603) 0:00:06.733 ********* 2025-03-11 00:02:26.606795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-11 00:02:26.609337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-11 00:02:26.609692 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-11 00:02:26.609727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-11 00:02:26.610372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-11 00:02:26.610770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-11 00:02:26.611576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-11 00:02:26.612278 | orchestrator | 2025-03-11 00:02:26.612500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:26.612859 | orchestrator | Tuesday 11 March 2025 00:02:26 +0000 (0:00:00.179) 0:00:06.913 ********* 2025-03-11 00:02:28.011490 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxtunCccrC9dkfnWm+HumhINVD35hJKtlseqsXm6fAdPz/jwoiY17FkJDXxswf/qrQ+QbPHRce5jXnsYm7OR06HbJqMYGOfwyydY5GdF1JZITo+NMoWlQJgydNO2F36cQQjrZk4ekYyeDrujZvPoYEvjzaFUE4FeD7dRb9C+6DdTxhCCTGH2iWjDQaPsOIemLTHSUhzgats1sZ2tjiecsgKV71toDiPQcsIFmZZtrfDJOnM780woumF6eRE4WXYeHgvzgEqiKgwg9j1Tob6HuqcFFMNKEWOmsWlWVqMcYJ6lmYaXGgr80c/nQE5mN8llOr+BkWsLCygE6yjYvfx9uyFtSYgxUEKyjErhPGwoGi2IJ/dKwtBLWbS5C+Ndp74qNbu8EJFAMGWiUumFWNmL0liKDHJwGrV/z4ye/DWIk+egaTErEPUcwEfmwQ1/WhiTmyAULUHRuB4ZMfVj3ezSBiBX6PtGhfcIuewYuMGQoBJ57Ii8A7GnxV6vNXCu2MQOM=) 2025-03-11 00:02:28.011705 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUAtdA+rPiTfwlKJmKAqq1l63A7TdJ1kC5bk0/gaZyOeq5XsUeIiB4NLr3NtOrsGA03kqvtEYLIbtVEDGTz4yw=) 2025-03-11 00:02:28.011752 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDXU8uH5wS3Cd/qYWTFc3go0ppal/gs2uyqEuIKgp05+) 2025-03-11 00:02:28.012786 | orchestrator | 2025-03-11 00:02:28.013015 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:28.013768 | orchestrator | Tuesday 11 March 2025 00:02:28 +0000 (0:00:01.402) 0:00:08.316 ********* 2025-03-11 00:02:29.214566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDczFxg3c3vZHFKol1CFKMoRv5eeEMNlwNJPTytfc1s5Xw2rFPeIi2oTnIDgoqXfRjGaHB18T9Sd411/OL11BWqw8Fo6GNQ2Ybs9o5SVc9TbY30nWpsyU/S4tn5y/r1FC44YDfvUJmz6Nt+TZIoWtj9yVymltVqyH8RPHmgb2u2gyNpdzlIFfLRc0T4/xZdHIxNkeVMHowC6dfUP8kZNPTysR9BHEGRpSKYEJF19cxWcc+cUp+H0k2zM4yQ6knav7CKjwvMBDrIFEAGiPH22ALMnfiKjolnheT5/0XUpWVd1w8HyhgaNr+V72qHWWKD8E2e3WnnF9lSg/rsk0Jw989jFQHEXf5KNSh+dDpfid4qnq3PDZ0oX1nNqpVpdeHDX998lErKUOC2SLKvQ4FUzG26r26nZb2ZIaEfoifW1Y+zQdAjLKwXDqjoRJbY5CsVQBBG5vRM1jBXn3eMTr8wnIrfhws/GsQKWrqCLn6k5wf5MIMBhgdjKiAhydKU6938aE=) 2025-03-11 00:02:29.214775 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFXLaVSOo1rg01cZZ5pDelymPU12DkfQXu4GRhsm1k8l) 2025-03-11 00:02:29.215696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD6PG+cRJgHn6Se6oHQnVEh8tzeQ3fS9wJSt2uptale0hRImF/RUx9KxsKHeb8UGKXKAqYrhkzx/6IdQCGUvDZs=) 2025-03-11 00:02:29.216089 | orchestrator | 2025-03-11 00:02:29.217001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:29.217532 | orchestrator | Tuesday 11 March 2025 00:02:29 +0000 (0:00:01.203) 0:00:09.520 ********* 2025-03-11 00:02:30.441302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDn3D9zHsTOEcuqZ3gK7vc0tZ+wCux7xNUoZJALc4L+scS8JP40o8ury+s6y+rHz5m2MtDYJD0l0E0TZCNnWMYaBO5Db0XtHctaE5AHn5OuWyn8wgReiQ1Nlpm/QaMKbLLDlI8YOpJCVs9pKM50/YYUG6RhNNTqlGVBTQPUFVuIIx/Nb5ShEXgkCMwrQ21pbXfTinBL3jBb0c/kRia270jdaBpONWfJcOtM+1t/kZI/bCLP7HWWNl9iA7/oYmLWQCPbpmRBh/j/LnYCPhrBa+YF5DpBLoW1UlbFWVM315n/fIPO2V1312Tq5FUYqkKgGhqkLaCoGnuwPTNzNvFYJydjUayEQD13B3CRbWT0rpwnT8t4j0tpWSKNqqSbSgoXuIzK6Y+08BnWdabiQyD1cXskU0l5HMlYPmrDRuDPTEeqs2+mRrPPZevr+PlBRsLt2Xe6iprYAYMdQXOJ6ULl0hke2bRu/WlLC2KL/8y4e+h8oJj+nzR66DBklLcVmGBI6yk=) 2025-03-11 00:02:30.442490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPmMCa9/dfgbO3cyoZkpRBIP76a4ZPVh9yf2DJwTR+aO1HCP7oCCO8g4ktaBhWeAfXUIJpN0QB7E9s2LWOEL+Gw=) 2025-03-11 00:02:30.442783 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICARLjVJ2AQBVbjlCZWjyF8+dDzA5FN3qhAwjaXOgbEw) 2025-03-11 00:02:30.443601 | orchestrator | 2025-03-11 00:02:30.444465 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:30.444984 | orchestrator | Tuesday 11 March 2025 00:02:30 +0000 (0:00:01.224) 
0:00:10.744 ********* 2025-03-11 00:02:31.584706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJoqfF0Qz+myco4OcHKMNTK1uOXTHZa50WWSVBe7+bMsbTMwYforresI2SbH6pp6BZWYPDiQ0Vr1I811lpu3fltii2AJ6xzQlFNB8zEF1Nwh2NfU1o8bxHLenX89hzuWmsVmSBlrOaBBv3B2VsMB+Jc8BrL6xWd8OuMHeokmyRF8vRoAmMqLZjZiJf9nm/0cvDGz9z8lhlOLh2Y80As/fWjVwt7L/DOoPhtv2G2/sqCkrIzevvUxl1uc3FSrWc3d4xYRAIw7X9Feo9Aj9ZbumQ0hGs5KGIsICX68e4HyT5JJYoNVXuuUesp5Ro0sJrL0bVswcmTqHEtGsyP9mdsRMo1NMefciWPBWe1zI23ys8CMME5IPC1AYvQjHnxwRViapZegqvSyZTpjKtwLEjObmB2aEZUt+xmN/7xcJaEjegonoNbr0p0oXg9W9Wut9X42L9sNTDmG5gBSMAojktb91zT50vhXhHhjr1wuxi1hwHFZahutvVAI5CsY8663NHheU=) 2025-03-11 00:02:31.584940 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCn4+Zp1ODQeOnY+RUEnnyxbSlvTSFtGpqzpXUFHuN2WAf68htESZfMRT2tO8Pkil48gL30XH3nmxO1H3OmDd1c=) 2025-03-11 00:02:31.585662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPptlQB+grhdb0L8yxszyPPx3ai78E8jAx9fPzovYpDG) 2025-03-11 00:02:31.587211 | orchestrator | 2025-03-11 00:02:31.587358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:31.588222 | orchestrator | Tuesday 11 March 2025 00:02:31 +0000 (0:00:01.146) 0:00:11.890 ********* 2025-03-11 00:02:32.863884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6WUEG2GiZnVIC6l/fAw5cq+1wN/zsYZ1qfIAZhozH+JvOBej5/MU6LBmN6lf600Jn+KZeEBpsqMTWEJKFG7bmJanaVP4KPnFuoZvW7wLVnl/sZ/taTQAFlZKwrww123C5iB7Zn0QPtcpn00CHluzjI/uABgI2gk7I9q8fM4NbY4OOzypIYRi7xCkUwdCQZrWI2FH/y6EO37ahhO44WSe2o2jYaSihBzFOg5NCQ+2hQoCq40+vlvFpP8JsDeZmpBf2Ijn4JOZuKPSiyi7DPTUb6bs8F05WEV0uR9v61x2RLaUSF+RRhl8W+ucvwDNx2Z0XUEv+jD6FSVijAiCq+LnnZGtSaxszo2B2JDk8+OSuwY7X+IERB/lJS40aYgAp0Ee/bL7EN6bESDAhG65jPxvqn5b1LjzBfOXWYFH+9RdHpz18+zmenIXdK28gnU+DqMKP8K9CfJ3bl8xkB2LHsVP6qZvpNdr559jirUJmR2284MdnirLyr6LLtEA/VgQfAm0=) 2025-03-11 00:02:32.864526 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQmsoLKrG++EWCHoGi4DXcG9Qu+Heq8Qc2Q1A4A9hjfzrK5W2UfrhrQYgQSCSFgMEIPfXaq1EZavBjHcdJnPnE=) 2025-03-11 00:02:32.864762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYCafKZpaGsc3p5yRbaorpaCCiJQ+uDueK9dErtcRgK) 2025-03-11 00:02:32.866424 | orchestrator | 2025-03-11 00:02:32.867193 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:32.867540 | orchestrator | Tuesday 11 March 2025 00:02:32 +0000 (0:00:01.277) 0:00:13.167 ********* 2025-03-11 00:02:34.031926 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNnGuOJ91Gvdq6j4h/eYsEDXbGLsBbtkxVG4+5RlQl+) 2025-03-11 00:02:34.032704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDsTMXTOkAcRBG/Aoe0LWpDer/CuxRrCtLldAnL3B95b5RTmJmFm/NgmnX94d+6GDfeJEloAHp86zuB615eJ87s/+AM3RXmmX4FRfhu9DMkk4BzS8c4saZ79iQEcFSErfSYYJTUBPIOqzpH4TCy3686LL1/r4j7rDJRmIxvSNzBA3/HvDfiOUj3VzCe2+HYCFilR1DuPCytopWsFmpptbr5A2g4eiQGBMrn34RMf/ZjxQLdhnC9WwUqiw74g9w2dKRLmS9Cd1g28Kdse+BF49QDspHamJNjrOYsgZ9GCn60e1yAfmBO2glwcjBD5kIQrXPW7LU2ZTfzEF5PObNKjK8WT6LxPQ/wBE2Lbf2QQUZDwpPGVkQ8dFq9cGGsEy38zjF1Z7vOtKntU6+Vexoh8pQpLfb8TzhvbI1nn95wSpGjK6kfJL1U7ByBAOKNqWO8qcQUFD6VF/7DCdcxJ5XYmBwl25fVFGk8CeiDUTGJSCMXe1xYSHi9EHQjETPJVjz7Ddk=) 2025-03-11 00:02:34.032757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNQEWmRh34WqcbfC/1j/5OWQsCv1HAbLYnGV/avelzl22DwwhSVE3bbEDjIWJx0Eyd+5bzb/6aoG07yAGJ1y3k=) 2025-03-11 00:02:34.034193 | orchestrator | 2025-03-11 00:02:34.034532 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:34.037254 | orchestrator | Tuesday 11 March 2025 00:02:34 +0000 (0:00:01.166) 0:00:14.334 ********* 2025-03-11 00:02:35.189870 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCE9TKWvlXHtjzVkqhTukSMzmhqpK7PQ2LhHn2HooZiv06aoPD4ZotXA6t7PGwFeW22gFIlErm7rJ2QVObMG2qAWUTgWVA5ueF2EiETK1c6fbx7bZfZ6Ub+mfPom+nffCGoLw8u3Rn0bYbmHSUQ7z3QvW4dbT6elM9pBycNscFlMEJAdm1mSR9ZjTO5YAHbSgfsb+BXF8rVjB9E/3cfTaGVqk/HB37WRBvHm90P030Q4cGKISzMJHXM1de+dPeYKU9GpY9+/zwTRCTxIUBzPb8n2jS7iQQ0hDgPtnyJT1Wky30QruCJqaVMvOl2KPVYn3zv9vztC1jeKq9BGcYY8SR5LBAuvz11Y2PhNvGouV5thRWY/8exB7ay/E4w5vmg8vWavPDMoc/mZMnLphHBahqtKJxD01+bvMkpL/ZzjKqaxFzlsQwJYfQnbPQLjud0GnNXt5l2knvCvlNO3Oq/R5T2rX5XL6upl7OvaVq2PgG7/YL5VmrNM8G/K5CD0kk4Nm0=) 2025-03-11 00:02:35.190175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLZRAKsDL353iAC/SxEj7Ao6RYcJhjbj9qJpvohHugfw4SSDi518UC+l5Ye2rYg0kMpPxabnyMV14dUIH4pTOBk=) 
2025-03-11 00:02:35.190939 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFp5NvrMvJV/JQ29+nUHm7Wu+o6GPfwPN8OyaTj52qw+) 2025-03-11 00:02:35.192168 | orchestrator | 2025-03-11 00:02:35.192815 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-03-11 00:02:35.193313 | orchestrator | Tuesday 11 March 2025 00:02:35 +0000 (0:00:01.160) 0:00:15.494 ********* 2025-03-11 00:02:40.874641 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-11 00:02:40.876063 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-11 00:02:40.876133 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-11 00:02:40.876157 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-11 00:02:40.878295 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-11 00:02:40.879083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-11 00:02:40.879434 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-11 00:02:40.880355 | orchestrator | 2025-03-11 00:02:40.881304 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-11 00:02:40.881673 | orchestrator | Tuesday 11 March 2025 00:02:40 +0000 (0:00:05.684) 0:00:21.179 ********* 2025-03-11 00:02:41.069342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-11 00:02:41.070108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-11 00:02:41.070870 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-11 00:02:41.072503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-11 00:02:41.073004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-11 00:02:41.073356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-11 00:02:41.073643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-11 00:02:41.074790 | orchestrator | 2025-03-11 00:02:41.078160 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:41.078585 | orchestrator | Tuesday 11 March 2025 00:02:41 +0000 (0:00:00.196) 0:00:21.375 ********* 2025-03-11 00:02:42.159801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxtunCccrC9dkfnWm+HumhINVD35hJKtlseqsXm6fAdPz/jwoiY17FkJDXxswf/qrQ+QbPHRce5jXnsYm7OR06HbJqMYGOfwyydY5GdF1JZITo+NMoWlQJgydNO2F36cQQjrZk4ekYyeDrujZvPoYEvjzaFUE4FeD7dRb9C+6DdTxhCCTGH2iWjDQaPsOIemLTHSUhzgats1sZ2tjiecsgKV71toDiPQcsIFmZZtrfDJOnM780woumF6eRE4WXYeHgvzgEqiKgwg9j1Tob6HuqcFFMNKEWOmsWlWVqMcYJ6lmYaXGgr80c/nQE5mN8llOr+BkWsLCygE6yjYvfx9uyFtSYgxUEKyjErhPGwoGi2IJ/dKwtBLWbS5C+Ndp74qNbu8EJFAMGWiUumFWNmL0liKDHJwGrV/z4ye/DWIk+egaTErEPUcwEfmwQ1/WhiTmyAULUHRuB4ZMfVj3ezSBiBX6PtGhfcIuewYuMGQoBJ57Ii8A7GnxV6vNXCu2MQOM=) 2025-03-11 00:02:42.160380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUAtdA+rPiTfwlKJmKAqq1l63A7TdJ1kC5bk0/gaZyOeq5XsUeIiB4NLr3NtOrsGA03kqvtEYLIbtVEDGTz4yw=) 2025-03-11 00:02:42.160777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDXU8uH5wS3Cd/qYWTFc3go0ppal/gs2uyqEuIKgp05+) 2025-03-11 00:02:42.161997 | orchestrator | 2025-03-11 00:02:42.163194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:42.164051 | orchestrator | Tuesday 11 March 2025 00:02:42 +0000 (0:00:01.089) 0:00:22.465 ********* 2025-03-11 00:02:43.363261 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDczFxg3c3vZHFKol1CFKMoRv5eeEMNlwNJPTytfc1s5Xw2rFPeIi2oTnIDgoqXfRjGaHB18T9Sd411/OL11BWqw8Fo6GNQ2Ybs9o5SVc9TbY30nWpsyU/S4tn5y/r1FC44YDfvUJmz6Nt+TZIoWtj9yVymltVqyH8RPHmgb2u2gyNpdzlIFfLRc0T4/xZdHIxNkeVMHowC6dfUP8kZNPTysR9BHEGRpSKYEJF19cxWcc+cUp+H0k2zM4yQ6knav7CKjwvMBDrIFEAGiPH22ALMnfiKjolnheT5/0XUpWVd1w8HyhgaNr+V72qHWWKD8E2e3WnnF9lSg/rsk0Jw989jFQHEXf5KNSh+dDpfid4qnq3PDZ0oX1nNqpVpdeHDX998lErKUOC2SLKvQ4FUzG26r26nZb2ZIaEfoifW1Y+zQdAjLKwXDqjoRJbY5CsVQBBG5vRM1jBXn3eMTr8wnIrfhws/GsQKWrqCLn6k5wf5MIMBhgdjKiAhydKU6938aE=) 2025-03-11 00:02:43.363709 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD6PG+cRJgHn6Se6oHQnVEh8tzeQ3fS9wJSt2uptale0hRImF/RUx9KxsKHeb8UGKXKAqYrhkzx/6IdQCGUvDZs=) 2025-03-11 00:02:43.363758 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFXLaVSOo1rg01cZZ5pDelymPU12DkfQXu4GRhsm1k8l) 2025-03-11 00:02:43.366374 | orchestrator | 2025-03-11 00:02:43.367365 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:43.367645 | orchestrator | Tuesday 11 March 2025 00:02:43 +0000 (0:00:01.202) 0:00:23.667 ********* 2025-03-11 00:02:44.541507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPmMCa9/dfgbO3cyoZkpRBIP76a4ZPVh9yf2DJwTR+aO1HCP7oCCO8g4ktaBhWeAfXUIJpN0QB7E9s2LWOEL+Gw=) 2025-03-11 00:02:44.541751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDn3D9zHsTOEcuqZ3gK7vc0tZ+wCux7xNUoZJALc4L+scS8JP40o8ury+s6y+rHz5m2MtDYJD0l0E0TZCNnWMYaBO5Db0XtHctaE5AHn5OuWyn8wgReiQ1Nlpm/QaMKbLLDlI8YOpJCVs9pKM50/YYUG6RhNNTqlGVBTQPUFVuIIx/Nb5ShEXgkCMwrQ21pbXfTinBL3jBb0c/kRia270jdaBpONWfJcOtM+1t/kZI/bCLP7HWWNl9iA7/oYmLWQCPbpmRBh/j/LnYCPhrBa+YF5DpBLoW1UlbFWVM315n/fIPO2V1312Tq5FUYqkKgGhqkLaCoGnuwPTNzNvFYJydjUayEQD13B3CRbWT0rpwnT8t4j0tpWSKNqqSbSgoXuIzK6Y+08BnWdabiQyD1cXskU0l5HMlYPmrDRuDPTEeqs2+mRrPPZevr+PlBRsLt2Xe6iprYAYMdQXOJ6ULl0hke2bRu/WlLC2KL/8y4e+h8oJj+nzR66DBklLcVmGBI6yk=) 2025-03-11 00:02:44.542675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICARLjVJ2AQBVbjlCZWjyF8+dDzA5FN3qhAwjaXOgbEw) 2025-03-11 00:02:44.543284 | orchestrator | 2025-03-11 00:02:44.544231 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:44.544943 | orchestrator | Tuesday 11 March 2025 00:02:44 +0000 (0:00:01.177) 0:00:24.845 
********* 2025-03-11 00:02:45.730756 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPptlQB+grhdb0L8yxszyPPx3ai78E8jAx9fPzovYpDG) 2025-03-11 00:02:45.731116 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJoqfF0Qz+myco4OcHKMNTK1uOXTHZa50WWSVBe7+bMsbTMwYforresI2SbH6pp6BZWYPDiQ0Vr1I811lpu3fltii2AJ6xzQlFNB8zEF1Nwh2NfU1o8bxHLenX89hzuWmsVmSBlrOaBBv3B2VsMB+Jc8BrL6xWd8OuMHeokmyRF8vRoAmMqLZjZiJf9nm/0cvDGz9z8lhlOLh2Y80As/fWjVwt7L/DOoPhtv2G2/sqCkrIzevvUxl1uc3FSrWc3d4xYRAIw7X9Feo9Aj9ZbumQ0hGs5KGIsICX68e4HyT5JJYoNVXuuUesp5Ro0sJrL0bVswcmTqHEtGsyP9mdsRMo1NMefciWPBWe1zI23ys8CMME5IPC1AYvQjHnxwRViapZegqvSyZTpjKtwLEjObmB2aEZUt+xmN/7xcJaEjegonoNbr0p0oXg9W9Wut9X42L9sNTDmG5gBSMAojktb91zT50vhXhHhjr1wuxi1hwHFZahutvVAI5CsY8663NHheU=) 2025-03-11 00:02:45.731161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCn4+Zp1ODQeOnY+RUEnnyxbSlvTSFtGpqzpXUFHuN2WAf68htESZfMRT2tO8Pkil48gL30XH3nmxO1H3OmDd1c=) 2025-03-11 00:02:45.732108 | orchestrator | 2025-03-11 00:02:45.732655 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:45.732920 | orchestrator | Tuesday 11 March 2025 00:02:45 +0000 (0:00:01.191) 0:00:26.036 ********* 2025-03-11 00:02:46.910910 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6WUEG2GiZnVIC6l/fAw5cq+1wN/zsYZ1qfIAZhozH+JvOBej5/MU6LBmN6lf600Jn+KZeEBpsqMTWEJKFG7bmJanaVP4KPnFuoZvW7wLVnl/sZ/taTQAFlZKwrww123C5iB7Zn0QPtcpn00CHluzjI/uABgI2gk7I9q8fM4NbY4OOzypIYRi7xCkUwdCQZrWI2FH/y6EO37ahhO44WSe2o2jYaSihBzFOg5NCQ+2hQoCq40+vlvFpP8JsDeZmpBf2Ijn4JOZuKPSiyi7DPTUb6bs8F05WEV0uR9v61x2RLaUSF+RRhl8W+ucvwDNx2Z0XUEv+jD6FSVijAiCq+LnnZGtSaxszo2B2JDk8+OSuwY7X+IERB/lJS40aYgAp0Ee/bL7EN6bESDAhG65jPxvqn5b1LjzBfOXWYFH+9RdHpz18+zmenIXdK28gnU+DqMKP8K9CfJ3bl8xkB2LHsVP6qZvpNdr559jirUJmR2284MdnirLyr6LLtEA/VgQfAm0=) 2025-03-11 00:02:46.912054 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQmsoLKrG++EWCHoGi4DXcG9Qu+Heq8Qc2Q1A4A9hjfzrK5W2UfrhrQYgQSCSFgMEIPfXaq1EZavBjHcdJnPnE=) 2025-03-11 00:02:46.912258 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYCafKZpaGsc3p5yRbaorpaCCiJQ+uDueK9dErtcRgK) 2025-03-11 00:02:46.913147 | orchestrator | 2025-03-11 00:02:46.913830 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:46.914567 | orchestrator | Tuesday 11 March 2025 00:02:46 +0000 (0:00:01.179) 0:00:27.216 ********* 2025-03-11 00:02:48.118214 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNnGuOJ91Gvdq6j4h/eYsEDXbGLsBbtkxVG4+5RlQl+) 2025-03-11 00:02:48.118790 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDsTMXTOkAcRBG/Aoe0LWpDer/CuxRrCtLldAnL3B95b5RTmJmFm/NgmnX94d+6GDfeJEloAHp86zuB615eJ87s/+AM3RXmmX4FRfhu9DMkk4BzS8c4saZ79iQEcFSErfSYYJTUBPIOqzpH4TCy3686LL1/r4j7rDJRmIxvSNzBA3/HvDfiOUj3VzCe2+HYCFilR1DuPCytopWsFmpptbr5A2g4eiQGBMrn34RMf/ZjxQLdhnC9WwUqiw74g9w2dKRLmS9Cd1g28Kdse+BF49QDspHamJNjrOYsgZ9GCn60e1yAfmBO2glwcjBD5kIQrXPW7LU2ZTfzEF5PObNKjK8WT6LxPQ/wBE2Lbf2QQUZDwpPGVkQ8dFq9cGGsEy38zjF1Z7vOtKntU6+Vexoh8pQpLfb8TzhvbI1nn95wSpGjK6kfJL1U7ByBAOKNqWO8qcQUFD6VF/7DCdcxJ5XYmBwl25fVFGk8CeiDUTGJSCMXe1xYSHi9EHQjETPJVjz7Ddk=) 2025-03-11 00:02:48.118945 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNQEWmRh34WqcbfC/1j/5OWQsCv1HAbLYnGV/avelzl22DwwhSVE3bbEDjIWJx0Eyd+5bzb/6aoG07yAGJ1y3k=) 2025-03-11 00:02:48.119024 | orchestrator | 2025-03-11 00:02:48.120291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-11 00:02:48.120345 | orchestrator | Tuesday 11 March 2025 00:02:48 +0000 (0:00:01.206) 0:00:28.422 ********* 2025-03-11 00:02:49.290630 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCE9TKWvlXHtjzVkqhTukSMzmhqpK7PQ2LhHn2HooZiv06aoPD4ZotXA6t7PGwFeW22gFIlErm7rJ2QVObMG2qAWUTgWVA5ueF2EiETK1c6fbx7bZfZ6Ub+mfPom+nffCGoLw8u3Rn0bYbmHSUQ7z3QvW4dbT6elM9pBycNscFlMEJAdm1mSR9ZjTO5YAHbSgfsb+BXF8rVjB9E/3cfTaGVqk/HB37WRBvHm90P030Q4cGKISzMJHXM1de+dPeYKU9GpY9+/zwTRCTxIUBzPb8n2jS7iQQ0hDgPtnyJT1Wky30QruCJqaVMvOl2KPVYn3zv9vztC1jeKq9BGcYY8SR5LBAuvz11Y2PhNvGouV5thRWY/8exB7ay/E4w5vmg8vWavPDMoc/mZMnLphHBahqtKJxD01+bvMkpL/ZzjKqaxFzlsQwJYfQnbPQLjud0GnNXt5l2knvCvlNO3Oq/R5T2rX5XL6upl7OvaVq2PgG7/YL5VmrNM8G/K5CD0kk4Nm0=) 2025-03-11 00:02:49.290944 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLZRAKsDL353iAC/SxEj7Ao6RYcJhjbj9qJpvohHugfw4SSDi518UC+l5Ye2rYg0kMpPxabnyMV14dUIH4pTOBk=) 
2025-03-11 00:02:49.291028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFp5NvrMvJV/JQ29+nUHm7Wu+o6GPfwPN8OyaTj52qw+) 2025-03-11 00:02:49.293022 | orchestrator | 2025-03-11 00:02:49.293651 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-03-11 00:02:49.294438 | orchestrator | Tuesday 11 March 2025 00:02:49 +0000 (0:00:01.174) 0:00:29.596 ********* 2025-03-11 00:02:49.502135 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-11 00:02:49.502628 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-11 00:02:49.502771 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-11 00:02:49.502791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-11 00:02:49.502806 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-11 00:02:49.502820 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-11 00:02:49.502868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-11 00:02:49.502940 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:02:49.503365 | orchestrator | 2025-03-11 00:02:49.503911 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-03-11 00:02:49.504318 | orchestrator | Tuesday 11 March 2025 00:02:49 +0000 (0:00:00.208) 0:00:29.805 ********* 2025-03-11 00:02:49.562919 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:02:49.563945 | orchestrator | 2025-03-11 00:02:49.564830 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-03-11 00:02:49.566007 | orchestrator | Tuesday 11 March 2025 00:02:49 +0000 (0:00:00.062) 0:00:29.868 ********* 2025-03-11 00:02:49.622308 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:02:49.623928 | orchestrator | 2025-03-11 
00:02:49.624863 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-03-11 00:02:49.625728 | orchestrator | Tuesday 11 March 2025 00:02:49 +0000 (0:00:00.059) 0:00:29.927 ********* 2025-03-11 00:02:50.366609 | orchestrator | changed: [testbed-manager] 2025-03-11 00:02:50.367521 | orchestrator | 2025-03-11 00:02:50.368666 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:02:50.368694 | orchestrator | 2025-03-11 00:02:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:02:50.368710 | orchestrator | 2025-03-11 00:02:50 | INFO  | Please wait and do not abort execution. 2025-03-11 00:02:50.368731 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-11 00:02:50.369582 | orchestrator | 2025-03-11 00:02:50.370253 | orchestrator | Tuesday 11 March 2025 00:02:50 +0000 (0:00:00.745) 0:00:30.672 ********* 2025-03-11 00:02:50.370825 | orchestrator | =============================================================================== 2025-03-11 00:02:50.371729 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.60s 2025-03-11 00:02:50.372547 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.68s 2025-03-11 00:02:50.373121 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.40s 2025-03-11 00:02:50.373517 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s 2025-03-11 00:02:50.373994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-03-11 00:02:50.374232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-11 00:02:50.374803 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.20s 2025-03-11 00:02:50.375364 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-03-11 00:02:50.376116 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-03-11 00:02:50.376600 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-03-11 00:02:50.377054 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-03-11 00:02:50.377348 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-03-11 00:02:50.377730 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-03-11 00:02:50.378129 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-03-11 00:02:50.378490 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-03-11 00:02:50.379003 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-03-11 00:02:50.379391 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2025-03-11 00:02:50.379891 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2025-03-11 00:02:50.380607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2025-03-11 00:02:50.381019 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-03-11 00:02:50.961635 | orchestrator | + osism apply squid 2025-03-11 00:02:52.591776 | orchestrator | 2025-03-11 00:02:52 | INFO  | Task 747e94ef-f00f-42ee-a694-3abab7fbfac4 (squid) was prepared for execution. 
2025-03-11 00:02:56.447859 | orchestrator | 2025-03-11 00:02:52 | INFO  | It takes a moment until task 747e94ef-f00f-42ee-a694-3abab7fbfac4 (squid) has been started and output is visible here. 2025-03-11 00:02:56.448059 | orchestrator | 2025-03-11 00:02:56.448752 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-03-11 00:02:56.448784 | orchestrator | 2025-03-11 00:02:56.448825 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-03-11 00:02:56.449301 | orchestrator | Tuesday 11 March 2025 00:02:56 +0000 (0:00:00.137) 0:00:00.137 ********* 2025-03-11 00:02:56.561333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-03-11 00:02:56.561908 | orchestrator | 2025-03-11 00:02:56.562491 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-03-11 00:02:56.563417 | orchestrator | Tuesday 11 March 2025 00:02:56 +0000 (0:00:00.116) 0:00:00.254 ********* 2025-03-11 00:02:58.232793 | orchestrator | ok: [testbed-manager] 2025-03-11 00:02:58.233262 | orchestrator | 2025-03-11 00:02:58.233747 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-03-11 00:02:58.234498 | orchestrator | Tuesday 11 March 2025 00:02:58 +0000 (0:00:01.669) 0:00:01.923 ********* 2025-03-11 00:02:59.523915 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-03-11 00:02:59.524574 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-03-11 00:02:59.524627 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-03-11 00:02:59.525406 | orchestrator | 2025-03-11 00:02:59.526095 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-03-11 00:02:59.526252 | orchestrator | Tuesday 11 
March 2025 00:02:59 +0000 (0:00:01.291) 0:00:03.215 ********* 2025-03-11 00:03:00.713208 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-03-11 00:03:00.713630 | orchestrator | 2025-03-11 00:03:00.713679 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-03-11 00:03:00.714536 | orchestrator | Tuesday 11 March 2025 00:03:00 +0000 (0:00:01.189) 0:00:04.405 ********* 2025-03-11 00:03:01.084833 | orchestrator | ok: [testbed-manager] 2025-03-11 00:03:01.085593 | orchestrator | 2025-03-11 00:03:01.086791 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-03-11 00:03:01.087350 | orchestrator | Tuesday 11 March 2025 00:03:01 +0000 (0:00:00.373) 0:00:04.778 ********* 2025-03-11 00:03:02.081437 | orchestrator | changed: [testbed-manager] 2025-03-11 00:03:02.081895 | orchestrator | 2025-03-11 00:03:02.081941 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-03-11 00:03:02.082812 | orchestrator | Tuesday 11 March 2025 00:03:02 +0000 (0:00:00.994) 0:00:05.773 ********* 2025-03-11 00:03:31.166508 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-03-11 00:03:43.648053 | orchestrator | ok: [testbed-manager] 2025-03-11 00:03:43.648202 | orchestrator | 2025-03-11 00:03:43.648224 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-03-11 00:03:43.648241 | orchestrator | Tuesday 11 March 2025 00:03:31 +0000 (0:00:29.078) 0:00:34.851 ********* 2025-03-11 00:03:43.648272 | orchestrator | changed: [testbed-manager] 2025-03-11 00:03:43.649216 | orchestrator | 2025-03-11 00:03:43.649247 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-03-11 00:03:43.649268 | orchestrator | Tuesday 11 March 2025 00:03:43 +0000 (0:00:12.485) 0:00:47.337 ********* 2025-03-11 00:04:43.729633 | orchestrator | Pausing for 60 seconds 2025-03-11 00:04:43.789275 | orchestrator | changed: [testbed-manager] 2025-03-11 00:04:43.789443 | orchestrator | 2025-03-11 00:04:43.789465 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-03-11 00:04:43.789480 | orchestrator | Tuesday 11 March 2025 00:04:43 +0000 (0:01:00.080) 0:01:47.417 ********* 2025-03-11 00:04:43.789510 | orchestrator | ok: [testbed-manager] 2025-03-11 00:04:43.789577 | orchestrator | 2025-03-11 00:04:43.790166 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-03-11 00:04:43.790456 | orchestrator | Tuesday 11 March 2025 00:04:43 +0000 (0:00:00.065) 0:01:47.482 ********* 2025-03-11 00:04:44.486088 | orchestrator | changed: [testbed-manager] 2025-03-11 00:04:44.486504 | orchestrator | 2025-03-11 00:04:44.486538 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:04:44.486662 | orchestrator | 2025-03-11 00:04:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-03-11 00:04:44.487254 | orchestrator | 2025-03-11 00:04:44 | INFO  | Please wait and do not abort execution. 2025-03-11 00:04:44.487283 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:04:44.488305 | orchestrator | 2025-03-11 00:04:44.488816 | orchestrator | Tuesday 11 March 2025 00:04:44 +0000 (0:00:00.696) 0:01:48.179 ********* 2025-03-11 00:04:44.489368 | orchestrator | =============================================================================== 2025-03-11 00:04:44.489627 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-03-11 00:04:44.490262 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 29.08s 2025-03-11 00:04:44.490705 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.49s 2025-03-11 00:04:44.491419 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.67s 2025-03-11 00:04:44.491777 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.29s 2025-03-11 00:04:44.492382 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.19s 2025-03-11 00:04:44.492559 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2025-03-11 00:04:44.492849 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s 2025-03-11 00:04:44.493264 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-03-11 00:04:44.493360 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s 2025-03-11 00:04:44.493677 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-03-11 00:04:45.034815 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-11 00:04:45.040643 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-03-11 00:04:45.040693 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-11 00:04:45.096384 | orchestrator | + [[ -1 -lt 0 ]] 2025-03-11 00:04:45.100239 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-11 00:04:45.100269 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-03-11 00:04:45.100293 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-11 00:04:45.105143 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-11 00:04:45.110627 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-03-11 00:04:46.946830 | orchestrator | 2025-03-11 00:04:46 | INFO  | Task 3e9f23d9-3fd4-4d23-a9ec-77731c4d7686 (operator) was prepared for execution. 2025-03-11 00:04:50.402550 | orchestrator | 2025-03-11 00:04:46 | INFO  | It takes a moment until task 3e9f23d9-3fd4-4d23-a9ec-77731c4d7686 (operator) has been started and output is visible here. 
2025-03-11 00:04:50.402726 | orchestrator | 2025-03-11 00:04:50.404406 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-03-11 00:04:50.404439 | orchestrator | 2025-03-11 00:04:50.404822 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-11 00:04:50.405296 | orchestrator | Tuesday 11 March 2025 00:04:50 +0000 (0:00:00.103) 0:00:00.103 ********* 2025-03-11 00:04:54.125074 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:04:54.125637 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:04:54.126365 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:04:54.128569 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:04:54.129027 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:04:54.133270 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:04:54.134062 | orchestrator | 2025-03-11 00:04:54.134099 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-03-11 00:04:54.134784 | orchestrator | Tuesday 11 March 2025 00:04:54 +0000 (0:00:03.726) 0:00:03.829 ********* 2025-03-11 00:04:55.038773 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:04:55.039014 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:04:55.042336 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:04:55.042729 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:04:55.042765 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:04:55.043192 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:04:55.046873 | orchestrator | 2025-03-11 00:04:55.049408 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-03-11 00:04:55.120979 | orchestrator | 2025-03-11 00:04:55.121027 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-11 00:04:55.121043 | orchestrator | Tuesday 11 March 2025 00:04:55 +0000 (0:00:00.912) 0:00:04.742 ********* 2025-03-11 
00:04:55.121066 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:04:55.150477 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:04:55.184160 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:04:55.234870 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:04:55.235676 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:04:55.236253 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:04:55.236633 | orchestrator | 2025-03-11 00:04:55.237110 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-11 00:04:55.240865 | orchestrator | Tuesday 11 March 2025 00:04:55 +0000 (0:00:00.197) 0:00:04.939 ********* 2025-03-11 00:04:55.316482 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:04:55.342430 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:04:55.369849 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:04:55.428491 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:04:55.428812 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:04:55.429361 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:04:55.429713 | orchestrator | 2025-03-11 00:04:55.430274 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-11 00:04:55.434962 | orchestrator | Tuesday 11 March 2025 00:04:55 +0000 (0:00:00.190) 0:00:05.129 ********* 2025-03-11 00:04:56.136477 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:04:56.137065 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:04:56.137982 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:04:56.138283 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:04:56.139220 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:04:56.139487 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:04:56.139525 | orchestrator | 2025-03-11 00:04:56.143032 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-11 00:04:56.146486 | orchestrator | Tuesday 11 March 2025 
00:04:56 +0000 (0:00:00.707) 0:00:05.837 ********* 2025-03-11 00:04:57.003545 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:04:57.004276 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:04:57.004492 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:04:57.005266 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:04:57.009903 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:04:57.011702 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:04:57.012397 | orchestrator | 2025-03-11 00:04:57.015760 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-11 00:04:57.016234 | orchestrator | Tuesday 11 March 2025 00:04:57 +0000 (0:00:00.869) 0:00:06.706 ********* 2025-03-11 00:04:58.179128 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-03-11 00:04:58.179326 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-03-11 00:04:58.179355 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-03-11 00:04:58.179671 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-03-11 00:04:58.180101 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-03-11 00:04:58.180704 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-03-11 00:04:58.185959 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-03-11 00:04:58.186389 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-03-11 00:04:58.186420 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-03-11 00:04:58.186698 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-03-11 00:04:58.187096 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-03-11 00:04:58.189739 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-03-11 00:04:58.191449 | orchestrator | 2025-03-11 00:04:58.191480 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-11 00:04:58.193189 | orchestrator | Tuesday 11 
March 2025 00:04:58 +0000 (0:00:01.174) 0:00:07.881 *********
2025-03-11 00:04:59.608649 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:04:59.613565 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:04:59.613844 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:04:59.613876 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:04:59.613891 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:04:59.613905 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:04:59.613919 | orchestrator |
2025-03-11 00:04:59.613979 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-03-11 00:04:59.614002 | orchestrator | Tuesday 11 March 2025 00:04:59 +0000 (0:00:01.429) 0:00:09.310 *********
2025-03-11 00:05:00.860747 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-03-11 00:05:00.861973 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-03-11 00:05:01.056140 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-03-11 00:05:01.056291 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.057097 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.057149 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.058150 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.058878 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.059852 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-03-11 00:05:01.061392 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.061807 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.061848 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.062790 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.063529 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.064298 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-03-11 00:05:01.065334 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.066012 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.067008 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.068101 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.068774 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.068793 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-03-11 00:05:01.069290 | orchestrator |
2025-03-11 00:05:01.070355 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-03-11 00:05:01.070823 | orchestrator | Tuesday 11 March 2025 00:05:01 +0000 (0:00:01.448) 0:00:10.759 *********
2025-03-11 00:05:01.747532 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:05:01.748089 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:05:01.748132 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:01.751440 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:01.753219 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:01.753596 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:05:01.754419 | orchestrator |
2025-03-11 00:05:01.755108 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-03-11 00:05:01.755451 | orchestrator | Tuesday 11 March 2025 00:05:01 +0000 (0:00:00.690) 0:00:11.450 *********
2025-03-11 00:05:01.834573 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:05:01.855071 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:05:01.882959 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:05:01.935467 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:01.936738 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:01.937608 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:01.938242 | orchestrator |
2025-03-11 00:05:01.938853 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-03-11 00:05:01.939470 | orchestrator | Tuesday 11 March 2025 00:05:01 +0000 (0:00:00.188) 0:00:11.638 *********
2025-03-11 00:05:02.702375 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-03-11 00:05:02.706126 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:02.708834 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-03-11 00:05:02.710105 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:05:02.710954 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-03-11 00:05:02.711875 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:02.712128 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-03-11 00:05:02.713375 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:02.714353 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-03-11 00:05:02.714897 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-03-11 00:05:02.716182 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:05:02.716554 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:05:02.716566 | orchestrator |
2025-03-11 00:05:02.716577 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-03-11 00:05:02.717344 | orchestrator | Tuesday 11 March 2025 00:05:02 +0000 (0:00:00.761) 0:00:12.399 *********
2025-03-11 00:05:02.761036 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:05:02.789352 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:05:02.822438 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:05:02.849701 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:02.885251 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:02.885649 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:02.887420 | orchestrator |
2025-03-11 00:05:02.951527 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-03-11 00:05:02.951579 | orchestrator | Tuesday 11 March 2025 00:05:02 +0000 (0:00:00.190) 0:00:12.589 *********
2025-03-11 00:05:02.951603 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:05:02.979498 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:05:03.008257 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:05:03.044199 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:03.086061 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:03.087453 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:03.087486 | orchestrator |
2025-03-11 00:05:03.088111 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-03-11 00:05:03.088800 | orchestrator | Tuesday 11 March 2025 00:05:03 +0000 (0:00:00.198) 0:00:12.788 *********
2025-03-11 00:05:03.143625 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:05:03.167429 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:05:03.197498 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:05:03.231649 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:03.276238 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:03.277250 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:03.277682 | orchestrator |
2025-03-11 00:05:03.283190 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-03-11 00:05:04.017626 | orchestrator | Tuesday 11 March 2025 00:05:03 +0000 (0:00:00.190) 0:00:12.979 *********
2025-03-11 00:05:04.017762 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:05:04.021043 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:05:04.021124 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:05:04.022203 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:04.022749 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:04.023375 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:04.023737 | orchestrator |
2025-03-11 00:05:04.024260 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-03-11 00:05:04.024686 | orchestrator | Tuesday 11 March 2025 00:05:04 +0000 (0:00:00.743) 0:00:13.722 *********
2025-03-11 00:05:04.108440 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:05:04.132452 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:05:04.155396 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:05:04.282348 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:04.284302 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:04.285970 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:04.287207 | orchestrator |
2025-03-11 00:05:04.290903 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:05:04.291001 | orchestrator | 2025-03-11 00:05:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:05:04.291883 | orchestrator | 2025-03-11 00:05:04 | INFO  | Please wait and do not abort execution.
2025-03-11 00:05:04.291916 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.292648 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.293845 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.294673 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.294766 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.295911 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 00:05:04.296748 | orchestrator |
2025-03-11 00:05:04.297111 | orchestrator | Tuesday 11 March 2025 00:05:04 +0000 (0:00:00.263) 0:00:13.986 *********
2025-03-11 00:05:04.297602 | orchestrator | ===============================================================================
2025-03-11 00:05:04.298212 | orchestrator | Gathering Facts --------------------------------------------------------- 3.73s
2025-03-11 00:05:04.298767 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.45s
2025-03-11 00:05:04.299296 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.43s
2025-03-11 00:05:04.299739 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s
2025-03-11 00:05:04.301137 | orchestrator | Do not require tty for all users ---------------------------------------- 0.91s
2025-03-11 00:05:04.301673 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-03-11 00:05:04.301703 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2025-03-11 00:05:04.302432 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.74s
2025-03-11 00:05:04.302831 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s
2025-03-11 00:05:04.303408 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.69s
2025-03-11 00:05:04.303859 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2025-03-11 00:05:04.307743 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2025-03-11 00:05:04.307837 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2025-03-11 00:05:04.307859 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s
2025-03-11 00:05:04.308493 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2025-03-11 00:05:04.309077 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2025-03-11 00:05:04.309503 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-03-11 00:05:04.863159 | orchestrator | + osism apply --environment custom facts
2025-03-11 00:05:06.460736 | orchestrator | 2025-03-11 00:05:06 | INFO  | Trying
to run play facts in environment custom
2025-03-11 00:05:06.515615 | orchestrator | 2025-03-11 00:05:06 | INFO  | Task 81224b66-0b9f-49fd-88e8-3f722860b823 (facts) was prepared for execution.
2025-03-11 00:05:10.120261 | orchestrator | 2025-03-11 00:05:06 | INFO  | It takes a moment until task 81224b66-0b9f-49fd-88e8-3f722860b823 (facts) has been started and output is visible here.
2025-03-11 00:05:10.120452 | orchestrator |
2025-03-11 00:05:10.120575 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-03-11 00:05:10.121198 | orchestrator |
2025-03-11 00:05:10.121268 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-11 00:05:10.121565 | orchestrator | Tuesday 11 March 2025 00:05:10 +0000 (0:00:00.100) 0:00:00.100 *********
2025-03-11 00:05:11.553253 | orchestrator | ok: [testbed-manager]
2025-03-11 00:05:12.701992 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:05:12.702260 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:12.702287 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:05:12.702902 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:05:12.702958 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:12.704645 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:12.706969 | orchestrator |
2025-03-11 00:05:12.707723 | orchestrator | TASK [Copy fact file] **********************************************************
2025-03-11 00:05:12.708364 | orchestrator | Tuesday 11 March 2025 00:05:12 +0000 (0:00:02.585) 0:00:02.685 *********
2025-03-11 00:05:14.142467 | orchestrator | ok: [testbed-manager]
2025-03-11 00:05:15.070603 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:05:15.071194 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:15.071799 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:05:15.072168 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:05:15.073085 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:15.073156 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:15.074268 | orchestrator |
2025-03-11 00:05:15.074451 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-03-11 00:05:15.075024 | orchestrator |
2025-03-11 00:05:15.075306 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-11 00:05:15.078289 | orchestrator | Tuesday 11 March 2025 00:05:15 +0000 (0:00:02.369) 0:00:05.055 *********
2025-03-11 00:05:15.168326 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:15.169676 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:15.172226 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:15.173158 | orchestrator |
2025-03-11 00:05:15.174137 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-11 00:05:15.175127 | orchestrator | Tuesday 11 March 2025 00:05:15 +0000 (0:00:00.099) 0:00:05.155 *********
2025-03-11 00:05:15.303423 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:15.304860 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:15.305586 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:15.306207 | orchestrator |
2025-03-11 00:05:15.306799 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-11 00:05:15.307373 | orchestrator | Tuesday 11 March 2025 00:05:15 +0000 (0:00:00.130) 0:00:05.285 *********
2025-03-11 00:05:15.425892 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:15.426289 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:15.432002 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:15.432251 | orchestrator |
2025-03-11 00:05:15.432278 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-11 00:05:15.432300 | orchestrator | Tuesday 11 March 2025 00:05:15 +0000 (0:00:00.128) 0:00:05.413 *********
2025-03-11 00:05:15.584646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 00:05:15.584793 | orchestrator |
2025-03-11 00:05:15.584820 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-11 00:05:15.585291 | orchestrator | Tuesday 11 March 2025 00:05:15 +0000 (0:00:00.153) 0:00:05.566 *********
2025-03-11 00:05:16.067105 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:16.067259 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:16.067813 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:16.069450 | orchestrator |
2025-03-11 00:05:16.070642 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-11 00:05:16.073357 | orchestrator | Tuesday 11 March 2025 00:05:16 +0000 (0:00:00.486) 0:00:06.054 *********
2025-03-11 00:05:16.173195 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:16.173279 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:16.173303 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:16.174076 | orchestrator |
2025-03-11 00:05:16.174440 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-11 00:05:16.174540 | orchestrator | Tuesday 11 March 2025 00:05:16 +0000 (0:00:00.106) 0:00:06.160 *********
2025-03-11 00:05:17.183429 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:17.186667 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:17.186827 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:17.186866 | orchestrator |
2025-03-11 00:05:17.187006 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-11 00:05:17.628052 | orchestrator | Tuesday 11 March 2025 00:05:17 +0000 (0:00:01.008) 0:00:07.168 *********
2025-03-11 00:05:17.628166 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:17.628834 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:17.629637 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:17.629676 | orchestrator |
2025-03-11 00:05:17.630218 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-11 00:05:17.631174 | orchestrator | Tuesday 11 March 2025 00:05:17 +0000 (0:00:00.446) 0:00:07.614 *********
2025-03-11 00:05:18.684017 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:18.685461 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:18.686416 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:18.687228 | orchestrator |
2025-03-11 00:05:18.687575 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-11 00:05:18.688636 | orchestrator | Tuesday 11 March 2025 00:05:18 +0000 (0:00:01.051) 0:00:08.666 *********
2025-03-11 00:05:32.702701 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:32.763560 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:32.763634 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:32.763651 | orchestrator |
2025-03-11 00:05:32.763667 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-03-11 00:05:32.763682 | orchestrator | Tuesday 11 March 2025 00:05:32 +0000 (0:00:00.109) 0:00:22.679 *********
2025-03-11 00:05:32.763709 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:05:32.802826 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:05:32.802902 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:05:32.804170 | orchestrator |
2025-03-11 00:05:32.804672 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-03-11 00:05:32.804880 | orchestrator | Tuesday 11 March 2025 00:05:32 +0000 (0:00:00.109) 0:00:22.789 *********
2025-03-11 00:05:41.097989 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:05:41.098444 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:05:41.098489 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:05:41.099595 | orchestrator |
2025-03-11 00:05:41.576985 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-11 00:05:41.577093 | orchestrator | Tuesday 11 March 2025 00:05:41 +0000 (0:00:08.291) 0:00:31.081 *********
2025-03-11 00:05:41.577124 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:45.354474 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:45.354583 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:45.354599 | orchestrator |
2025-03-11 00:05:45.354612 | orchestrator | TASK [Copy fact files] *********************************************************
2025-03-11 00:05:45.354625 | orchestrator | Tuesday 11 March 2025 00:05:41 +0000 (0:00:00.479) 0:00:31.560 *********
2025-03-11 00:05:45.354651 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-03-11 00:05:45.356018 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-03-11 00:05:45.356048 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-03-11 00:05:45.357186 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-03-11 00:05:45.358258 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-03-11 00:05:45.361746 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-03-11 00:05:45.363068 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-03-11 00:05:45.364401 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-03-11 00:05:45.365257 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-03-11 00:05:45.366067 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-03-11 00:05:45.366900 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-03-11 00:05:45.367574 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-03-11 00:05:45.368489 | orchestrator |
2025-03-11 00:05:45.369493 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-11 00:05:45.370550 | orchestrator | Tuesday 11 March 2025 00:05:45 +0000 (0:00:03.772) 0:00:35.333 *********
2025-03-11 00:05:46.479326 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:46.481518 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:46.481562 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:46.482422 | orchestrator |
2025-03-11 00:05:46.484261 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-11 00:05:46.485368 | orchestrator |
2025-03-11 00:05:46.486097 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 00:05:46.487052 | orchestrator | Tuesday 11 March 2025 00:05:46 +0000 (0:00:01.130) 0:00:36.463 *********
2025-03-11 00:05:52.676878 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:05:52.677120 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:05:52.677842 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:05:52.677871 | orchestrator | ok: [testbed-manager]
2025-03-11 00:05:52.678718 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:52.678765 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:52.679192 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:52.679718 | orchestrator |
2025-03-11 00:05:52.680644 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:05:52.680679 | orchestrator | 2025-03-11 00:05:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:05:52.681276 | orchestrator | 2025-03-11 00:05:52 | INFO  | Please wait and do not abort execution.
2025-03-11 00:05:52.681299 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 00:05:52.681419 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 00:05:52.683404 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 00:05:52.683510 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 00:05:52.684877 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:05:52.685071 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:05:52.685797 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:05:52.686445 | orchestrator |
2025-03-11 00:05:52.688054 | orchestrator | Tuesday 11 March 2025 00:05:52 +0000 (0:00:06.198) 0:00:42.661 *********
2025-03-11 00:05:52.688886 | orchestrator | ===============================================================================
2025-03-11 00:05:52.689392 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.01s
2025-03-11 00:05:52.689811 | orchestrator | Install required packages (Debian) -------------------------------------- 8.29s
2025-03-11 00:05:52.690360 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.20s
2025-03-11 00:05:52.691157 | orchestrator | Copy fact files --------------------------------------------------------- 3.77s
2025-03-11 00:05:52.691534 | orchestrator | Create custom facts directory ------------------------------------------- 2.59s
2025-03-11 00:05:52.692143 | orchestrator | Copy fact file ---------------------------------------------------------- 2.37s
2025-03-11 00:05:52.692530 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.13s
2025-03-11 00:05:52.693051 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-03-11 00:05:52.693461 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2025-03-11 00:05:52.694146 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2025-03-11 00:05:52.694688 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2025-03-11 00:05:52.695255 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-03-11 00:05:52.696007 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-03-11 00:05:52.696610 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s
2025-03-11 00:05:52.697430 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s
2025-03-11 00:05:52.697995 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-03-11 00:05:52.698701 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-03-11 00:05:52.699218 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-03-11 00:05:53.224321 | orchestrator | + osism apply bootstrap
2025-03-11 00:05:54.887717 | orchestrator | 2025-03-11 00:05:54 | INFO  | Task 7c981d43-6025-48ba-b943-18f0d30b4dc5 (bootstrap) was prepared for execution.
2025-03-11 00:05:58.627189 | orchestrator | 2025-03-11 00:05:54 | INFO  | It takes a moment until task 7c981d43-6025-48ba-b943-18f0d30b4dc5 (bootstrap) has been started and output is visible here.
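The per-task timing summaries above pair each task name with a dash rule and a duration in seconds (e.g. `Update package cache ----- 14.01s`). As a minimal sketch, assuming the ` ---` separator format seen in this log (the layout comes from the timing callback's output, not a documented contract), the name and duration can be separated with plain shell parameter expansion:

```shell
# Summary line copied verbatim from the play recap above.
line='osism.commons.repository : Update package cache ------------------------ 14.01s'

# The duration is the last whitespace-separated field; the task name is
# everything before the first " ---" of the dash rule.
duration="${line##* }"
task="${line%% ---*}"
echo "$task took $duration"
```

`${line%% ---*}` strips the longest suffix starting at the first ` ---`, which works here because task names in the summary do not themselves contain that sequence.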
2025-03-11 00:05:58.627326 | orchestrator |
2025-03-11 00:05:58.631028 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-03-11 00:05:58.731147 | orchestrator |
2025-03-11 00:05:58.731188 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-03-11 00:05:58.731222 | orchestrator | Tuesday 11 March 2025 00:05:58 +0000 (0:00:00.147) 0:00:00.147 *********
2025-03-11 00:05:58.731247 | orchestrator | ok: [testbed-manager]
2025-03-11 00:05:58.763836 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:05:58.800459 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:05:58.828887 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:05:58.923852 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:05:58.924728 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:05:58.925740 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:05:58.926864 | orchestrator |
2025-03-11 00:05:58.929438 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-11 00:05:58.929869 | orchestrator |
2025-03-11 00:05:58.929904 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 00:05:58.930702 | orchestrator | Tuesday 11 March 2025 00:05:58 +0000 (0:00:00.301) 0:00:00.449 *********
2025-03-11 00:06:02.803277 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:06:02.803534 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:06:02.805562 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:06:02.806433 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:06:02.807381 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:06:02.808269 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:06:02.808455 | orchestrator | ok: [testbed-manager]
2025-03-11 00:06:02.808936 | orchestrator |
2025-03-11 00:06:02.810283 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-03-11 00:06:02.810649 | orchestrator |
2025-03-11 00:06:02.811353 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 00:06:02.812246 | orchestrator | Tuesday 11 March 2025 00:06:02 +0000 (0:00:03.877) 0:00:04.327 *********
2025-03-11 00:06:02.907365 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-03-11 00:06:02.932591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-03-11 00:06:02.933325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-03-11 00:06:02.986150 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-03-11 00:06:02.986763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-03-11 00:06:02.986933 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-03-11 00:06:02.987556 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-03-11 00:06:03.343220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-03-11 00:06:03.343967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-03-11 00:06:03.344585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-03-11 00:06:03.347529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-03-11 00:06:03.348929 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-03-11 00:06:03.348956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-03-11 00:06:03.348970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-03-11 00:06:03.348984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-03-11 00:06:03.349003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-03-11 00:06:03.349521 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-03-11 00:06:03.350084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-03-11 00:06:03.350859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-03-11 00:06:03.352370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-03-11 00:06:03.353037 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-03-11 00:06:03.353819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-03-11 00:06:03.354584 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-03-11 00:06:03.355295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-03-11 00:06:03.356041 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:06:03.356728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-03-11 00:06:03.357934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-03-11 00:06:03.358624 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-03-11 00:06:03.359268 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-03-11 00:06:03.359463 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-03-11 00:06:03.360070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-03-11 00:06:03.360753 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-03-11 00:06:03.361250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-03-11 00:06:03.362120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-03-11 00:06:03.362546 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:06:03.363005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-03-11 00:06:03.363330 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-03-11 00:06:03.364086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-03-11 00:06:03.364458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-03-11 00:06:03.364964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-03-11 00:06:03.365574 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-03-11 00:06:03.366649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-03-11 00:06:03.367573 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:06:03.368344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-03-11 00:06:03.368747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-03-11 00:06:03.369393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-03-11 00:06:03.370113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-03-11 00:06:03.370488 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:06:03.370886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-03-11 00:06:03.371392 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:06:03.371672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-03-11 00:06:03.372099 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-03-11 00:06:03.373935 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:06:03.375073 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-03-11 00:06:03.376050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-03-11 00:06:03.376743 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:06:03.377468 | orchestrator |
2025-03-11 00:06:03.378104 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-03-11 00:06:03.381240 | orchestrator |
2025-03-11 00:06:03.432674 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-03-11 00:06:03.432708 | orchestrator | Tuesday 11 March 2025 00:06:03 +0000 (0:00:00.539) 0:00:04.866 *********
2025-03-11 00:06:03.432728 | orchestrator | ok: [testbed-manager]
2025-03-11 00:06:03.466206 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:06:03.503766 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:06:03.537217 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:06:03.606140 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:06:03.610580 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:06:03.610885 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:06:03.610935 | orchestrator |
2025-03-11 00:06:03.610954 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-03-11 00:06:03.610975 | orchestrator | Tuesday 11 March 2025 00:06:03 +0000 (0:00:00.262) 0:00:05.129 *********
2025-03-11 00:06:04.977304 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:06:04.979686 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:06:04.980375 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:06:04.980970 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:06:04.982165 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:06:04.982719 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:06:04.983564 | orchestrator | ok: [testbed-manager]
2025-03-11 00:06:04.984510 | orchestrator |
2025-03-11 00:06:04.984993 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-03-11 00:06:04.985408 | orchestrator | Tuesday 11 March 2025 00:06:04 +0000 (0:00:01.371) 0:00:06.500 *********
2025-03-11 00:06:06.345744 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:06:06.346074 | orchestrator | ok: [testbed-manager]
2025-03-11 00:06:06.348450 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:06:06.348805 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:06:06.349959 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:06:06.350407 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:06:06.352108 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:06:06.352616 | orchestrator |
2025-03-11 00:06:06.354100 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-03-11 00:06:06.360514 | orchestrator | Tuesday 11 March 2025 00:06:06 +0000 (0:00:01.367) 0:00:07.868 *********
2025-03-11 00:06:06.701000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:06:06.702125 | orchestrator |
2025-03-11 00:06:06.702161 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-03-11 00:06:06.702720 | orchestrator | Tuesday 11 March 2025 00:06:06 +0000 (0:00:00.353) 0:00:08.222 *********
2025-03-11 00:06:08.931418 | orchestrator | changed: [testbed-manager]
2025-03-11 00:06:08.931618 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:06:08.931640 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:06:08.931655 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:06:08.931674 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:06:08.932247 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:06:08.933430 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:06:08.934292 | orchestrator |
2025-03-11 00:06:08.935845 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-03-11 00:06:08.936416 | orchestrator | Tuesday 11 March 2025 00:06:08 +0000 (0:00:02.231) 0:00:10.453 *********
2025-03-11 00:06:09.023637 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:06:09.228253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:06:09.228385 | orchestrator |
2025-03-11 00:06:09.229833 |
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-03-11 00:06:09.230557 | orchestrator | Tuesday 11 March 2025 00:06:09 +0000 (0:00:00.299) 0:00:10.752 ********* 2025-03-11 00:06:10.345793 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:10.346158 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:10.346804 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:10.347363 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:10.347850 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:10.348508 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:10.349233 | orchestrator | 2025-03-11 00:06:10.349310 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-03-11 00:06:10.349997 | orchestrator | Tuesday 11 March 2025 00:06:10 +0000 (0:00:01.115) 0:00:11.868 ********* 2025-03-11 00:06:10.442453 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:11.020832 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:11.021347 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:11.021688 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:11.023260 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:11.024871 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:11.024897 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:11.024937 | orchestrator | 2025-03-11 00:06:11.024958 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-03-11 00:06:11.147960 | orchestrator | Tuesday 11 March 2025 00:06:11 +0000 (0:00:00.671) 0:00:12.540 ********* 2025-03-11 00:06:11.148034 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:11.174414 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:11.203469 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:11.510580 | orchestrator | skipping: [testbed-node-0] 2025-03-11 
00:06:11.510795 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:11.511414 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:11.513313 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:11.513734 | orchestrator | 2025-03-11 00:06:11.513836 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-11 00:06:11.513869 | orchestrator | Tuesday 11 March 2025 00:06:11 +0000 (0:00:00.493) 0:00:13.034 ********* 2025-03-11 00:06:11.590807 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:11.624162 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:11.650757 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:11.680450 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:11.747762 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:06:11.747852 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:11.750281 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:12.111901 | orchestrator | 2025-03-11 00:06:12.112048 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-11 00:06:12.112065 | orchestrator | Tuesday 11 March 2025 00:06:11 +0000 (0:00:00.238) 0:00:13.272 ********* 2025-03-11 00:06:12.112095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:06:12.114724 | orchestrator | 2025-03-11 00:06:12.115275 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-11 00:06:12.115438 | orchestrator | Tuesday 11 March 2025 00:06:12 +0000 (0:00:00.357) 0:00:13.629 ********* 2025-03-11 00:06:12.503543 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:06:12.504514 | orchestrator | 2025-03-11 00:06:12.505226 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-11 00:06:12.505260 | orchestrator | Tuesday 11 March 2025 00:06:12 +0000 (0:00:00.396) 0:00:14.026 ********* 2025-03-11 00:06:13.860846 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:13.861431 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:13.863554 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:13.864203 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:13.865737 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:13.866425 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:13.868172 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:13.869050 | orchestrator | 2025-03-11 00:06:13.870089 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-11 00:06:13.870627 | orchestrator | Tuesday 11 March 2025 00:06:13 +0000 (0:00:01.357) 0:00:15.384 ********* 2025-03-11 00:06:13.942222 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:13.980307 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:14.003279 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:14.033591 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:14.099002 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:06:14.099883 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:14.100971 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:14.101519 | orchestrator | 2025-03-11 00:06:14.104251 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-11 00:06:14.845733 | orchestrator | Tuesday 11 March 2025 
00:06:14 +0000 (0:00:00.238) 0:00:15.622 ********* 2025-03-11 00:06:14.845937 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:14.850474 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:14.851433 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:14.852703 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:14.854596 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:14.858316 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:14.859134 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:14.859726 | orchestrator | 2025-03-11 00:06:14.860446 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-11 00:06:14.861161 | orchestrator | Tuesday 11 March 2025 00:06:14 +0000 (0:00:00.745) 0:00:16.368 ********* 2025-03-11 00:06:14.982363 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:15.011621 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:15.054174 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:15.131818 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:15.132619 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:06:15.134385 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:15.135140 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:15.135937 | orchestrator | 2025-03-11 00:06:15.136563 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-11 00:06:15.137347 | orchestrator | Tuesday 11 March 2025 00:06:15 +0000 (0:00:00.287) 0:00:16.656 ********* 2025-03-11 00:06:15.751750 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:15.752362 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:15.753454 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:15.758429 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:15.758532 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:15.758553 | orchestrator | changed: 
[testbed-node-3] 2025-03-11 00:06:15.758573 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:15.758675 | orchestrator | 2025-03-11 00:06:15.761582 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-11 00:06:15.764517 | orchestrator | Tuesday 11 March 2025 00:06:15 +0000 (0:00:00.617) 0:00:17.273 ********* 2025-03-11 00:06:17.034433 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:17.037537 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:17.037583 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:17.039660 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:17.039700 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:17.040851 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:17.041248 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:17.042281 | orchestrator | 2025-03-11 00:06:17.043111 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-11 00:06:17.043635 | orchestrator | Tuesday 11 March 2025 00:06:17 +0000 (0:00:01.276) 0:00:18.550 ********* 2025-03-11 00:06:18.321136 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:18.321609 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:18.321644 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:18.321987 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:18.324643 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:18.324946 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:18.326495 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:18.326869 | orchestrator | 2025-03-11 00:06:18.328849 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-11 00:06:18.329836 | orchestrator | Tuesday 11 March 2025 00:06:18 +0000 (0:00:01.294) 0:00:19.844 ********* 2025-03-11 00:06:18.695279 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:06:18.695565 | orchestrator | 2025-03-11 00:06:18.700822 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-11 00:06:18.701783 | orchestrator | Tuesday 11 March 2025 00:06:18 +0000 (0:00:00.373) 0:00:20.218 ********* 2025-03-11 00:06:18.760417 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:20.448977 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:20.451128 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:20.452210 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:20.458674 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:20.459559 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:20.460454 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:20.461653 | orchestrator | 2025-03-11 00:06:20.462519 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-11 00:06:20.463555 | orchestrator | Tuesday 11 March 2025 00:06:20 +0000 (0:00:01.752) 0:00:21.971 ********* 2025-03-11 00:06:20.530669 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:20.563838 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:20.598011 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:20.632376 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:20.701368 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:20.701957 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:20.702211 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:20.702699 | orchestrator | 2025-03-11 00:06:20.703976 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-11 00:06:20.704842 | orchestrator | Tuesday 11 March 2025 00:06:20 
+0000 (0:00:00.254) 0:00:22.226 ********* 2025-03-11 00:06:20.780837 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:20.816187 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:20.846278 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:20.875506 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:20.960658 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:20.961134 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:20.962129 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:20.963381 | orchestrator | 2025-03-11 00:06:20.963933 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-11 00:06:20.964697 | orchestrator | Tuesday 11 March 2025 00:06:20 +0000 (0:00:00.258) 0:00:22.484 ********* 2025-03-11 00:06:21.068484 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:21.104279 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:21.134764 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:21.171154 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:21.245583 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:21.246298 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:21.246336 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:21.246869 | orchestrator | 2025-03-11 00:06:21.247510 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-11 00:06:21.248148 | orchestrator | Tuesday 11 March 2025 00:06:21 +0000 (0:00:00.285) 0:00:22.769 ********* 2025-03-11 00:06:21.568665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:06:21.572525 | orchestrator | 2025-03-11 00:06:21.573168 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-11 00:06:21.574176 | 
orchestrator | Tuesday 11 March 2025 00:06:21 +0000 (0:00:00.318) 0:00:23.088 ********* 2025-03-11 00:06:22.168588 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:22.168754 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:22.168773 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:22.168793 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:22.169284 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:22.169350 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:22.169620 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:22.170120 | orchestrator | 2025-03-11 00:06:22.170221 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-11 00:06:22.170596 | orchestrator | Tuesday 11 March 2025 00:06:22 +0000 (0:00:00.604) 0:00:23.692 ********* 2025-03-11 00:06:22.259561 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:22.296754 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:22.330191 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:22.373691 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:22.445823 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:06:22.446382 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:22.446579 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:22.446992 | orchestrator | 2025-03-11 00:06:22.447244 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-11 00:06:22.447550 | orchestrator | Tuesday 11 March 2025 00:06:22 +0000 (0:00:00.277) 0:00:23.970 ********* 2025-03-11 00:06:23.577328 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:23.577690 | orchestrator | changed: [testbed-manager] 2025-03-11 00:06:23.579015 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:23.580696 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:23.581850 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:23.583000 | orchestrator | 
changed: [testbed-node-1] 2025-03-11 00:06:23.584273 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:23.585028 | orchestrator | 2025-03-11 00:06:23.585803 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-11 00:06:23.586652 | orchestrator | Tuesday 11 March 2025 00:06:23 +0000 (0:00:01.122) 0:00:25.092 ********* 2025-03-11 00:06:24.156732 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:24.157447 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:24.157991 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:24.159325 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:24.159944 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:24.160430 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:24.160995 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:24.161523 | orchestrator | 2025-03-11 00:06:24.162129 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-11 00:06:24.162797 | orchestrator | Tuesday 11 March 2025 00:06:24 +0000 (0:00:00.588) 0:00:25.680 ********* 2025-03-11 00:06:25.412462 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:25.415331 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:25.415409 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:25.415475 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:25.416065 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:25.416092 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:25.416145 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:25.416165 | orchestrator | 2025-03-11 00:06:25.416559 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-11 00:06:25.417667 | orchestrator | Tuesday 11 March 2025 00:06:25 +0000 (0:00:01.253) 0:00:26.934 ********* 2025-03-11 00:06:39.013606 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:39.013862 | orchestrator | ok: 
[testbed-node-5] 2025-03-11 00:06:39.013894 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:39.014085 | orchestrator | changed: [testbed-manager] 2025-03-11 00:06:39.014107 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:39.014122 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:39.014475 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:39.014815 | orchestrator | 2025-03-11 00:06:39.014893 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-03-11 00:06:39.014952 | orchestrator | Tuesday 11 March 2025 00:06:39 +0000 (0:00:13.596) 0:00:40.530 ********* 2025-03-11 00:06:39.099461 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:39.133596 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:39.158067 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:39.193163 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:39.289313 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:39.290873 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:39.291985 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:39.296224 | orchestrator | 2025-03-11 00:06:39.297317 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-03-11 00:06:39.297735 | orchestrator | Tuesday 11 March 2025 00:06:39 +0000 (0:00:00.281) 0:00:40.812 ********* 2025-03-11 00:06:39.402853 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:39.440329 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:39.481464 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:39.521634 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:39.609653 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:39.610059 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:39.611202 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:39.612080 | orchestrator | 2025-03-11 00:06:39.612584 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-03-11 00:06:39.613335 | orchestrator | Tuesday 11 March 2025 00:06:39 +0000 (0:00:00.316) 0:00:41.128 ********* 2025-03-11 00:06:39.701923 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:39.744572 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:39.772498 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:39.812938 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:39.889535 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:39.892673 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:39.894486 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:39.895788 | orchestrator | 2025-03-11 00:06:39.896635 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-03-11 00:06:39.897860 | orchestrator | Tuesday 11 March 2025 00:06:39 +0000 (0:00:00.280) 0:00:41.409 ********* 2025-03-11 00:06:40.299195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:06:40.301234 | orchestrator | 2025-03-11 00:06:40.303054 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-03-11 00:06:40.303089 | orchestrator | Tuesday 11 March 2025 00:06:40 +0000 (0:00:00.411) 0:00:41.821 ********* 2025-03-11 00:06:42.086572 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:42.087847 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:42.087883 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:42.087946 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:42.090397 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:42.091237 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:42.092978 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:42.094109 | orchestrator | 2025-03-11 00:06:42.095352 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-03-11 00:06:42.095821 | orchestrator | Tuesday 11 March 2025 00:06:42 +0000 (0:00:01.784) 0:00:43.605 ********* 2025-03-11 00:06:43.326087 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:43.326839 | orchestrator | changed: [testbed-manager] 2025-03-11 00:06:43.327092 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:43.327827 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:43.328372 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:43.329805 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:43.330114 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:43.330538 | orchestrator | 2025-03-11 00:06:43.332681 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-03-11 00:06:43.334008 | orchestrator | Tuesday 11 March 2025 00:06:43 +0000 (0:00:01.242) 0:00:44.848 ********* 2025-03-11 00:06:44.239870 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:06:44.241239 | orchestrator | ok: [testbed-manager] 2025-03-11 00:06:44.242193 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:06:44.243487 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:06:44.244843 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:06:44.245769 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:06:44.246637 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:06:44.247290 | orchestrator | 2025-03-11 00:06:44.247971 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-03-11 00:06:44.249189 | orchestrator | Tuesday 11 March 2025 00:06:44 +0000 (0:00:00.912) 0:00:45.761 ********* 2025-03-11 00:06:44.651502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 
00:06:44.652251 | orchestrator | 2025-03-11 00:06:44.652582 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-03-11 00:06:44.653262 | orchestrator | Tuesday 11 March 2025 00:06:44 +0000 (0:00:00.414) 0:00:46.175 ********* 2025-03-11 00:06:45.870227 | orchestrator | changed: [testbed-manager] 2025-03-11 00:06:45.871620 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:06:45.872663 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:06:45.873991 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:06:45.875413 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:06:45.875636 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:06:45.876218 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:06:45.876670 | orchestrator | 2025-03-11 00:06:45.878259 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-03-11 00:06:45.879749 | orchestrator | Tuesday 11 March 2025 00:06:45 +0000 (0:00:01.216) 0:00:47.391 ********* 2025-03-11 00:06:45.974921 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:06:46.008696 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:06:46.036986 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:06:46.067475 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:06:46.263506 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:06:46.266088 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:06:46.267260 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:06:46.268457 | orchestrator | 2025-03-11 00:06:46.271099 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-03-11 00:06:46.272338 | orchestrator | Tuesday 11 March 2025 00:06:46 +0000 (0:00:00.393) 0:00:47.784 ********* 2025-03-11 00:07:01.783929 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:07:01.784143 | orchestrator | changed: [testbed-node-0] 2025-03-11 
00:07:01.784165 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:07:01.784179 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:07:01.784192 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:07:01.784211 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:07:01.784725 | orchestrator | changed: [testbed-manager] 2025-03-11 00:07:01.785173 | orchestrator | 2025-03-11 00:07:01.785497 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-03-11 00:07:01.785826 | orchestrator | Tuesday 11 March 2025 00:07:01 +0000 (0:00:15.516) 0:01:03.301 ********* 2025-03-11 00:07:02.677454 | orchestrator | ok: [testbed-manager] 2025-03-11 00:07:02.677827 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:07:02.678653 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:07:02.679791 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:07:02.682350 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:07:02.683530 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:07:02.683551 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:07:02.683564 | orchestrator | 2025-03-11 00:07:02.683583 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-03-11 00:07:02.683877 | orchestrator | Tuesday 11 March 2025 00:07:02 +0000 (0:00:00.899) 0:01:04.200 ********* 2025-03-11 00:07:03.679208 | orchestrator | ok: [testbed-manager] 2025-03-11 00:07:03.679387 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:07:03.680353 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:07:03.681860 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:07:03.682153 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:07:03.682931 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:07:03.684157 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:07:03.684631 | orchestrator | 2025-03-11 00:07:03.685280 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-03-11 00:07:03.687431 | orchestrator | Tuesday 11 March 2025 00:07:03 +0000 (0:00:01.002) 0:01:05.203 *********
2025-03-11 00:07:03.766322 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:03.798611 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:03.830856 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:03.864362 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:03.946311 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:03.950127 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:03.950959 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:03.950988 | orchestrator |
2025-03-11 00:07:03.951777 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-03-11 00:07:03.952457 | orchestrator | Tuesday 11 March 2025 00:07:03 +0000 (0:00:00.266) 0:01:05.469 *********
2025-03-11 00:07:04.032872 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:04.065621 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:04.095990 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:04.129916 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:04.192629 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:04.193075 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:04.193339 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:04.193721 | orchestrator |
2025-03-11 00:07:04.194108 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-03-11 00:07:04.194820 | orchestrator | Tuesday 11 March 2025 00:07:04 +0000 (0:00:00.247) 0:01:05.717 *********
2025-03-11 00:07:04.562866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:07:04.563296 | orchestrator |
2025-03-11 00:07:04.563324 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-03-11 00:07:04.564210 | orchestrator | Tuesday 11 March 2025 00:07:04 +0000 (0:00:00.366) 0:01:06.083 *********
2025-03-11 00:07:06.574810 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:06.575623 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:06.575657 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:06.575715 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:06.576163 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:06.576393 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:06.576781 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:06.577313 | orchestrator |
2025-03-11 00:07:06.578225 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-03-11 00:07:07.211329 | orchestrator | Tuesday 11 March 2025 00:07:06 +0000 (0:00:02.012) 0:01:08.096 *********
2025-03-11 00:07:07.211450 | orchestrator | changed: [testbed-manager]
2025-03-11 00:07:07.211866 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:07:07.211929 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:07:07.211943 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:07:07.211956 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:07:07.211969 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:07:07.211987 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:07:07.212042 | orchestrator |
2025-03-11 00:07:07.212261 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-03-11 00:07:07.212707 | orchestrator | Tuesday 11 March 2025 00:07:07 +0000 (0:00:00.635) 0:01:08.732 *********
2025-03-11 00:07:07.298409 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:07.335403 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:07.361927 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:07.404774 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:07.472973 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:07.473368 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:07.473506 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:07.474068 | orchestrator |
2025-03-11 00:07:07.474706 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-03-11 00:07:07.474771 | orchestrator | Tuesday 11 March 2025 00:07:07 +0000 (0:00:00.264) 0:01:08.997 *********
2025-03-11 00:07:08.705197 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:08.705375 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:08.705692 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:08.706388 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:08.706858 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:08.707404 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:08.708043 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:08.708796 | orchestrator |
2025-03-11 00:07:08.709162 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-03-11 00:07:08.709612 | orchestrator | Tuesday 11 March 2025 00:07:08 +0000 (0:00:01.230) 0:01:10.228 *********
2025-03-11 00:07:10.382509 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:07:10.383117 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:07:10.385084 | orchestrator | changed: [testbed-manager]
2025-03-11 00:07:10.386102 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:07:10.387348 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:07:10.388402 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:07:10.389345 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:07:10.390956 | orchestrator |
2025-03-11 00:07:10.392207 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-03-11 00:07:10.392835 | orchestrator | Tuesday 11 March 2025 00:07:10 +0000 (0:00:01.675) 0:01:11.904 *********
2025-03-11 00:07:12.856318 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:12.857100 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:12.857142 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:12.857608 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:12.858435 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:12.860416 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:12.861233 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:12.861419 | orchestrator |
2025-03-11 00:07:12.861991 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-03-11 00:07:12.862574 | orchestrator | Tuesday 11 March 2025 00:07:12 +0000 (0:00:02.473) 0:01:14.378 *********
2025-03-11 00:07:49.479065 | orchestrator | ok: [testbed-manager]
2025-03-11 00:07:49.479274 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:07:49.479299 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:07:49.479314 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:07:49.479328 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:07:49.479342 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:07:49.479404 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:07:49.479525 | orchestrator |
2025-03-11 00:07:49.480215 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-03-11 00:07:49.480651 | orchestrator | Tuesday 11 March 2025 00:07:49 +0000 (0:00:36.601) 0:01:50.979 *********
2025-03-11 00:09:23.826009 | orchestrator | changed: [testbed-manager]
2025-03-11 00:09:23.826243 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:09:23.826268 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:09:23.826283 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:09:23.826298 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:09:23.826312 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:09:23.826351 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:09:23.826820 | orchestrator |
2025-03-11 00:09:23.826986 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-03-11 00:09:23.827015 | orchestrator | Tuesday 11 March 2025 00:09:23 +0000 (0:01:34.362) 0:03:25.342 *********
2025-03-11 00:09:25.627006 | orchestrator | changed: [testbed-manager]
2025-03-11 00:09:25.627926 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:09:25.628501 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:09:25.628532 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:09:25.628799 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:09:25.628826 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:09:25.629281 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:09:25.629762 | orchestrator |
2025-03-11 00:09:25.630351 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-03-11 00:09:25.630853 | orchestrator | Tuesday 11 March 2025 00:09:25 +0000 (0:00:01.805) 0:03:27.148 *********
2025-03-11 00:09:39.485292 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:09:39.485704 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:09:39.485745 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:09:39.486686 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:09:39.486711 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:09:39.486726 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:09:39.486745 | orchestrator | changed: [testbed-manager]
2025-03-11 00:09:39.488582 | orchestrator |
2025-03-11 00:09:39.489189 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-03-11 00:09:39.489989 | orchestrator | Tuesday 11 March 2025 00:09:39 +0000 (0:00:13.855) 0:03:41.003 *********
2025-03-11 00:09:39.877245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-03-11 00:09:39.878793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-03-11 00:09:39.879561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-03-11 00:09:39.884064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-03-11 00:09:39.884844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-03-11 00:09:39.886141 | orchestrator |
2025-03-11 00:09:39.887062 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-03-11 00:09:39.887967 | orchestrator | Tuesday 11 March 2025 00:09:39 +0000 (0:00:00.397) 0:03:41.400 *********
2025-03-11 00:09:39.936881 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:39.969881 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:39.970106 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:40.006208 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:09:40.006310 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:40.006766 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:40.035964 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:09:40.064223 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:09:40.682202 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:40.683205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:40.683243 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-03-11 00:09:40.683302 | orchestrator |
2025-03-11 00:09:40.684113 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-03-11 00:09:40.684910 | orchestrator | Tuesday 11 March 2025 00:09:40 +0000 (0:00:00.803) 0:03:42.204 *********
2025-03-11 00:09:40.770548 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:40.770670 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:40.770948 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:40.771325 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:40.772114 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:40.772766 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:40.772792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:40.773327 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:40.776304 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:40.776334 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:40.826603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:40.826647 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:40.826663 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:40.826677 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:40.826691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:40.826712 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:40.827595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:40.828342 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:40.828736 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:40.829740 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:40.830940 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:40.831500 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:40.831957 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:40.832388 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:40.832999 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:40.833490 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:40.834130 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:40.868117 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:09:40.868636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:40.869269 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:40.869612 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:40.870131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:40.872203 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:40.914083 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:40.914549 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:09:40.914634 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:40.915205 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:40.915648 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:40.916554 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:40.916883 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:40.916911 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:40.949277 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:40.949377 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:44.742446 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:09:44.742645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:44.743147 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:44.744616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-03-11 00:09:44.746436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:44.747561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:44.748443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-03-11 00:09:44.749998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:44.750277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:44.751414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-03-11 00:09:44.752548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:44.753680 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:44.754401 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-03-11 00:09:44.754997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:44.756021 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:44.756627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-03-11 00:09:44.757238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:44.758097 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:44.759174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-03-11 00:09:44.759382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:44.759869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:44.760639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-03-11 00:09:44.761113 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:44.761985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:44.762758 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-03-11 00:09:44.763346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:44.763875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:44.764534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-03-11 00:09:44.764958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:44.765472 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:44.766268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-03-11 00:09:44.766683 | orchestrator |
2025-03-11 00:09:44.767100 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-03-11 00:09:44.767388 | orchestrator | Tuesday 11 March 2025 00:09:44 +0000 (0:00:04.059) 0:03:46.263 *********
2025-03-11 00:09:46.264675 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.269073 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.273337 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.273362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.273383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.274601 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.274623 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-03-11 00:09:46.274637 | orchestrator |
2025-03-11 00:09:46.274657 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-03-11 00:09:46.275578 | orchestrator | Tuesday 11 March 2025 00:09:46 +0000 (0:00:01.519) 0:03:47.782 *********
2025-03-11 00:09:46.319121 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.345246 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:46.436284 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.774703 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:09:46.775051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.776660 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:09:46.776688 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.776962 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:09:46.776985 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.777004 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.778206 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-03-11 00:09:46.779007 | orchestrator |
2025-03-11 00:09:46.779034 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-03-11 00:09:46.849643 | orchestrator | Tuesday 11 March 2025 00:09:46 +0000 (0:00:00.514) 0:03:48.297 *********
2025-03-11 00:09:46.849677 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:46.881684 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:46.978147 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:46.979715 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:48.414946 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:09:48.415095 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:09:48.415707 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:48.416331 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:09:48.417785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:48.418097 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:48.418130 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-03-11 00:09:48.418813 | orchestrator |
2025-03-11 00:09:48.419068 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-03-11 00:09:48.419475 | orchestrator | Tuesday 11 March 2025 00:09:48 +0000 (0:00:01.639) 0:03:49.936 *********
2025-03-11 00:09:48.475952 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:48.507992 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:09:48.534154 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:09:48.562009 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:09:48.588720 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:09:48.747389 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:09:48.748055 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:09:48.748268 | orchestrator |
2025-03-11 00:09:48.749384 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-03-11 00:09:48.749770 | orchestrator | Tuesday 11 March 2025 00:09:48 +0000 (0:00:00.334) 0:03:50.271 *********
2025-03-11 00:09:55.040288 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:09:55.040502 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:09:55.040525 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:09:55.040540 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:09:55.040561 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:09:55.041010 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:09:55.041327 | orchestrator | ok: [testbed-manager]
2025-03-11 00:09:55.042327 | orchestrator |
2025-03-11 00:09:55.045216 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-03-11 00:09:55.082284 | orchestrator | Tuesday 11 March 2025 00:09:55 +0000 (0:00:06.288) 0:03:56.559 *********
2025-03-11 00:09:55.082404 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-03-11 00:09:55.116600 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-03-11 00:09:55.154941 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:09:55.196472 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:09:55.196552 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-03-11 00:09:55.244035 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-03-11 00:09:55.283341 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:09:55.283409 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-03-11 00:09:55.283435 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:09:55.283490 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-03-11 00:09:55.358266 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:09:55.358409 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:09:55.358656 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-03-11 00:09:55.359675 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:09:55.359761 | orchestrator |
2025-03-11 00:09:55.360333 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-03-11 00:09:55.363083 | orchestrator | Tuesday 11 March 2025 00:09:55 +0000 (0:00:00.324) 0:03:56.883 *********
2025-03-11 00:09:56.531555 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-03-11 00:09:56.531739 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-03-11 00:09:56.531767 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-03-11 00:09:56.532464 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-03-11 00:09:56.532783 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-03-11 00:09:56.533535 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-03-11 00:09:56.533642 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-03-11 00:09:56.534003 | orchestrator |
2025-03-11 00:09:56.536110 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-03-11 00:09:56.536514 | orchestrator | Tuesday 11 March 2025 00:09:56 +0000 (0:00:01.169) 0:03:58.052 *********
2025-03-11 00:09:57.154355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:09:57.154528 | orchestrator |
2025-03-11 00:09:57.154555 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-03-11 00:09:57.155023 | orchestrator | Tuesday 11 March 2025 00:09:57 +0000 (0:00:00.623) 0:03:58.676 *********
2025-03-11 00:09:58.503992 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:09:58.504431 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:09:58.504867 | orchestrator | ok: [testbed-manager]
2025-03-11 00:09:58.505554 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:09:58.505582 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:09:58.506084 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:09:58.506333 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:09:58.506640 | orchestrator |
2025-03-11 00:09:58.507398 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-03-11 00:09:59.218425 | orchestrator | Tuesday 11 March 2025 00:09:58 +0000 (0:00:01.350) 0:04:00.026 *********
2025-03-11 00:09:59.218580 | orchestrator | ok: [testbed-manager]
2025-03-11 00:09:59.218779 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:09:59.220877 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:09:59.221539 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:09:59.222488 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:09:59.223661 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:09:59.224944 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:09:59.225845 | orchestrator |
2025-03-11 00:09:59.227382 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-03-11 00:09:59.228322 | orchestrator | Tuesday 11 March 2025 00:09:59 +0000 (0:00:00.713) 0:04:00.740 *********
2025-03-11 00:09:59.864322 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:09:59.866353 | orchestrator | changed: [testbed-manager]
2025-03-11 00:09:59.866854 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:09:59.866884 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:09:59.868306 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:09:59.869064 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:09:59.870090 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:09:59.871232 | orchestrator |
2025-03-11 00:09:59.872196 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-03-11 00:09:59.872743 | orchestrator | Tuesday 11 March 2025 00:09:59 +0000 (0:00:00.644)       0:04:01.384 *********
2025-03-11 00:10:00.500495 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:00.500705 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:00.501456 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:00.501487 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:00.501915 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:00.502998 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:00.504334 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:00.504953 | orchestrator |
2025-03-11 00:10:00.505603 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-03-11 00:10:00.506478 | orchestrator | Tuesday 11 March 2025 00:10:00 +0000 (0:00:00.639)       0:04:02.023 *********
2025-03-11 00:10:01.521799 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649940.9931476, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.522670 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649938.5364187, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.524320 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649931.2007208, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.526598 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649937.0362551, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.527097 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649923.7800915, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.528259 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649938.557604, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.529129 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741649935.1736362, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.530331 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649871.3989494, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.530848 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649873.4079573, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.531578 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649863.6010602, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.532011 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649946.6869586, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.533066 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649866.4279191, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.533665 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649868.5334537, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.534550 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741649865.5462945, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-03-11 00:10:01.534890 | orchestrator |
2025-03-11 00:10:01.535269 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-03-11 00:10:01.535999 | orchestrator | Tuesday 11 March 2025 00:10:01 +0000 (0:00:01.019)       0:04:03.043 *********
2025-03-11 00:10:02.693782 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:02.693988 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:02.694013 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:02.694083 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:02.694444 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:02.694596 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:02.694783 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:02.695254 | orchestrator |
2025-03-11 00:10:02.695550 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-03-11 00:10:02.695797 | orchestrator | Tuesday 11 March 2025 00:10:02 +0000 (0:00:01.171)       0:04:04.215 *********
2025-03-11 00:10:03.912861 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:03.913034 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:03.914340 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:03.917006 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:03.919586 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:03.920396 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:03.921659 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:03.922593 | orchestrator |
2025-03-11 00:10:03.924869 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-03-11 00:10:03.925448 | orchestrator | Tuesday 11 March 2025 00:10:03 +0000 (0:00:01.218)       0:04:05.434 *********
2025-03-11 00:10:03.988118 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:10:04.025570 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:10:04.067383 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:10:04.100099 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:10:04.139200 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:10:04.213744 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:10:04.214999 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:10:04.216029 | orchestrator |
2025-03-11 00:10:04.217423 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-03-11 00:10:04.217921 | orchestrator | Tuesday 11 March 2025 00:10:04 +0000 (0:00:00.303)       0:04:05.737 *********
2025-03-11 00:10:05.087005 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:05.087223 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:05.087849 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:05.087881 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:05.090372 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:05.091139 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:05.091162 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:05.091181 | orchestrator |
2025-03-11 00:10:05.091304 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-03-11 00:10:05.091330 | orchestrator | Tuesday 11 March 2025 00:10:05 +0000 (0:00:00.872)       0:04:06.609 *********
2025-03-11 00:10:05.537942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:10:05.539629 | orchestrator |
2025-03-11 00:10:05.539742 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-03-11 00:10:05.543249 | orchestrator | Tuesday 11 March 2025 00:10:05 +0000 (0:00:00.451)       0:04:07.061 *********
2025-03-11 00:10:13.680680 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:13.680897 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:13.680922 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:13.680935 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:13.680954 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:13.682488 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:13.684607 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:13.685217 | orchestrator |
2025-03-11 00:10:13.685240 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-03-11 00:10:13.685767 | orchestrator | Tuesday 11 March 2025 00:10:13 +0000 (0:00:08.141)       0:04:15.203 *********
2025-03-11 00:10:14.779852 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:14.780024 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:14.780655 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:14.781280 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:14.782181 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:14.783202 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:14.783984 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:14.784335 | orchestrator |
2025-03-11 00:10:14.785166 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-03-11 00:10:14.785590 | orchestrator | Tuesday 11 March 2025 00:10:14 +0000 (0:00:01.099)       0:04:16.302 *********
2025-03-11 00:10:15.793423 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:15.793775 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:15.794197 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:15.795156 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:15.796237 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:15.796851 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:15.796889 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:15.798104 | orchestrator |
2025-03-11 00:10:15.798145 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-03-11 00:10:15.798452 | orchestrator | Tuesday 11 March 2025 00:10:15 +0000 (0:00:01.013)       0:04:17.316 *********
2025-03-11 00:10:16.279605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:10:16.279882 | orchestrator |
2025-03-11 00:10:16.280607 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-03-11 00:10:16.280996 | orchestrator | Tuesday 11 March 2025 00:10:16 +0000 (0:00:00.486)       0:04:17.803 *********
2025-03-11 00:10:25.397096 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:25.397279 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:25.397311 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:25.398074 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:25.399118 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:25.399933 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:25.400484 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:25.401347 | orchestrator |
2025-03-11 00:10:25.401944 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-03-11 00:10:25.402317 | orchestrator | Tuesday 11 March 2025 00:10:25 +0000 (0:00:09.101)       0:04:26.904 *********
2025-03-11 00:10:26.064950 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:26.065104 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:26.065547 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:26.066978 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:26.067664 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:26.068608 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:26.069074 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:26.069754 | orchestrator |
2025-03-11 00:10:26.070542 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-03-11 00:10:26.070991 | orchestrator | Tuesday 11 March 2025 00:10:26 +0000 (0:00:00.683)       0:04:27.588 *********
2025-03-11 00:10:27.214707 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:27.216501 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:27.217797 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:27.217844 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:27.217864 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:27.218644 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:27.219461 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:27.220111 | orchestrator |
2025-03-11 00:10:27.221002 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-03-11 00:10:27.221482 | orchestrator | Tuesday 11 March 2025 00:10:27 +0000 (0:00:01.148)       0:04:28.736 *********
2025-03-11 00:10:28.331434 | orchestrator | changed: [testbed-manager]
2025-03-11 00:10:28.331765 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:10:28.332715 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:10:28.333531 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:10:28.334621 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:10:28.334923 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:10:28.335299 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:10:28.335747 | orchestrator |
2025-03-11 00:10:28.337015 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-03-11 00:10:28.337225 | orchestrator | Tuesday 11 March 2025 00:10:28 +0000 (0:00:01.117)       0:04:29.853 *********
2025-03-11 00:10:28.423674 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:28.509632 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:28.560500 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:28.612496 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:28.705090 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:28.706184 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:28.706336 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:28.707633 | orchestrator |
2025-03-11 00:10:28.708333 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-03-11 00:10:28.708532 | orchestrator | Tuesday 11 March 2025 00:10:28 +0000 (0:00:00.375)       0:04:30.228 *********
2025-03-11 00:10:28.834429 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:28.871686 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:28.913067 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:28.994146 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:29.092085 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:29.092372 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:29.092509 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:29.093239 | orchestrator |
2025-03-11 00:10:29.093472 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-03-11 00:10:29.094304 | orchestrator | Tuesday 11 March 2025 00:10:29 +0000 (0:00:00.386)       0:04:30.615 *********
2025-03-11 00:10:29.231020 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:29.289104 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:29.327605 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:29.376183 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:29.441351 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:29.442771 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:29.443331 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:29.444077 | orchestrator |
2025-03-11 00:10:29.445203 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-03-11 00:10:35.343054 | orchestrator | Tuesday 11 March 2025 00:10:29 +0000 (0:00:00.351)       0:04:30.966 *********
2025-03-11 00:10:35.343206 | orchestrator | ok: [testbed-manager]
2025-03-11 00:10:35.343282 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:10:35.343306 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:10:35.343726 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:10:35.344191 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:10:35.344796 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:10:35.345261 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:10:35.347942 | orchestrator |
2025-03-11 00:10:35.348594 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-03-11 00:10:35.349168 | orchestrator | Tuesday 11 March 2025 00:10:35 +0000 (0:00:05.898)       0:04:36.865 *********
2025-03-11 00:10:35.799938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:10:35.800608 | orchestrator |
2025-03-11 00:10:35.801710 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-03-11 00:10:35.802891 | orchestrator | Tuesday 11 March 2025 00:10:35 +0000 (0:00:00.457)       0:04:37.323 *********
2025-03-11 00:10:35.890286 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-03-11 00:10:35.891243 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-03-11 00:10:35.939900 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:10:35.941060 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-03-11 00:10:35.941200 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-03-11 00:10:35.942077 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-03-11 00:10:35.998232 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-03-11 00:10:35.998347 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:10:36.000297 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-03-11 00:10:36.050413 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-03-11 00:10:36.050463 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:10:36.095540 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-03-11 00:10:36.095590 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-03-11 00:10:36.095991 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:10:36.097141 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-03-11 00:10:36.182703 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:10:36.183211 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-03-11 00:10:36.183464 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:10:36.184006 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-03-11 00:10:36.184244 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-03-11 00:10:36.184639 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:10:36.185088 | orchestrator |
2025-03-11 00:10:36.185240 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-03-11 00:10:36.185495 | orchestrator | Tuesday 11 March 2025 00:10:36 +0000 (0:00:00.384)       0:04:37.707 *********
2025-03-11 00:10:36.705947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:10:36.706400 | orchestrator |
2025-03-11 00:10:36.706680 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-03-11 00:10:36.707433 | orchestrator | Tuesday 11 March 2025 00:10:36 +0000 (0:00:00.522)       0:04:38.230 *********
2025-03-11 00:10:36.757121 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-03-11 00:10:36.796488 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:10:36.854060 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-03-11 00:10:36.854095 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:10:36.854404 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-03-11 00:10:36.895478 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:10:36.896028 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-03-11 00:10:36.938638 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:10:36.939405 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-03-11 00:10:36.992463 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:10:36.992887 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-03-11 00:10:37.091692 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:10:37.092310 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-03-11 00:10:37.093320 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:10:37.094460 | orchestrator |
2025-03-11 00:10:37.095018 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-03-11 00:10:37.096058 | orchestrator | Tuesday 11 March 2025 00:10:37 +0000 (0:00:00.383)       0:04:38.614 *********
2025-03-11 00:10:37.585343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:10:37.585941 | orchestrator |
2025-03-11 00:10:37.586491 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-03-11 00:10:37.587066 | orchestrator | Tuesday 11 March 2025 00:10:37 +0000 (0:00:00.495)       0:04:39.109 *********
2025-03-11 00:11:11.670933 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:11.671206 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:11.671235 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:11.671257 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:11.672816 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:11.673855 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:11.674886 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:11.675957 | orchestrator |
2025-03-11 00:11:11.676881 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-03-11 00:11:11.677608 | orchestrator | Tuesday 11 March 2025 00:11:11 +0000 (0:00:34.080)       0:05:13.190 *********
2025-03-11 00:11:18.895886 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:18.896110 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:18.896133 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:18.896154 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:18.896857 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:18.896889 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:18.897361 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:18.898502 | orchestrator |
2025-03-11 00:11:18.900925 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-03-11 00:11:18.900971 | orchestrator | Tuesday 11 March 2025 00:11:18 +0000 (0:00:07.226)       0:05:20.416 *********
2025-03-11 00:11:26.591862 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:26.592094 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:26.592123 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:26.592145 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:26.592324 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:26.592356 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:26.592907 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:26.593168 | orchestrator |
2025-03-11 00:11:26.594404 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-03-11 00:11:28.468537 | orchestrator | Tuesday 11 March 2025 00:11:26 +0000 (0:00:07.697)       0:05:28.114 *********
2025-03-11 00:11:28.468853 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:11:28.468954 | orchestrator | ok: [testbed-manager]
2025-03-11 00:11:28.469125 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:11:28.469156 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:11:28.469585 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:11:28.470597 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:11:28.472322 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:11:28.473340 | orchestrator |
2025-03-11 00:11:28.474191 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-03-11 00:11:28.474954 | orchestrator | Tuesday 11 March 2025 00:11:28 +0000 (0:00:01.874)       0:05:29.988 *********
2025-03-11 00:11:34.469388 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:34.470269 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:34.470290 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:34.470591 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:34.473640 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:34.477068 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:34.477953 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:34.478760 | orchestrator |
2025-03-11 00:11:34.479831 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-03-11 00:11:34.480440 | orchestrator | Tuesday 11 March 2025 00:11:34 +0000 (0:00:05.997)       0:05:35.986 *********
2025-03-11 00:11:35.019099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:11:35.019219 | orchestrator |
2025-03-11 00:11:35.019708 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-03-11 00:11:35.020227 | orchestrator | Tuesday 11 March 2025 00:11:35 +0000 (0:00:00.555)       0:05:36.542 *********
2025-03-11 00:11:35.807826 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:35.812314 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:35.813223 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:35.814411 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:35.815168 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:35.816747 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:35.819093 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:35.819859 | orchestrator |
2025-03-11 00:11:35.821068 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-03-11 00:11:35.824098 | orchestrator | Tuesday 11 March 2025 00:11:35 +0000 (0:00:00.786)       0:05:37.328 *********
2025-03-11 00:11:37.396019 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:11:37.396397 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:11:37.396433 | orchestrator | ok: [testbed-manager]
2025-03-11 00:11:37.396455 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:11:37.397169 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:11:37.397569 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:11:37.397606 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:11:37.398159 | orchestrator |
2025-03-11 00:11:37.398421 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-03-11 00:11:37.398836 | orchestrator | Tuesday 11 March 2025 00:11:37 +0000 (0:00:01.587)       0:05:38.916 *********
2025-03-11 00:11:38.252893 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:11:38.258918 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:11:38.260312 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:11:38.260907 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:11:38.261842 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:11:38.262264 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:38.263056 | orchestrator | changed: [testbed-manager]
2025-03-11 00:11:38.263757 | orchestrator |
2025-03-11 00:11:38.264434 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-03-11 00:11:38.264814 | orchestrator | Tuesday 11 March 2025 00:11:38 +0000 (0:00:00.857)       0:05:39.773 *********
2025-03-11 00:11:38.330454 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:11:38.373387 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:11:38.415664 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:11:38.470491 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:11:38.502324 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:11:38.592736 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:11:38.594187 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:11:38.594295 | orchestrator |
2025-03-11 00:11:38.594597 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-03-11 00:11:38.594882 | orchestrator | Tuesday 11 March 2025 00:11:38 +0000 (0:00:00.342)       0:05:40.116 *********
2025-03-11 00:11:38.670736 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:11:38.746240 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:11:38.784425 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:11:38.830333 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:11:39.065580 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:11:39.066660 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:11:39.068246 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:11:39.069246 | orchestrator |
2025-03-11 00:11:39.070519 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-03-11 00:11:39.071213 | orchestrator | Tuesday 11 March 2025 00:11:39 +0000 (0:00:00.472)       0:05:40.588 *********
2025-03-11 00:11:39.177936 | orchestrator | ok: [testbed-manager]
2025-03-11 00:11:39.219504 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:11:39.258817 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:11:39.296953 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:11:39.381414 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:11:39.382310 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:11:39.382690 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:11:39.383081 | orchestrator |
2025-03-11 00:11:39.384058 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-03-11 00:11:39.384529 | orchestrator | Tuesday 11 March 2025 00:11:39 +0000 (0:00:00.316)       0:05:40.905 *********
2025-03-11 00:11:39.524228 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:11:39.562258 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:11:39.596605 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:11:39.631460 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:11:39.709659 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:11:39.711531 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:11:39.714115 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:11:39.714623 | orchestrator |
2025-03-11 00:11:39.714657 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-03-11 00:11:39.715204 | orchestrator | Tuesday 11 March 2025 00:11:39 +0000 (0:00:00.325)       0:05:41.231 *********
2025-03-11 00:11:39.837163 | orchestrator | ok: [testbed-manager]
2025-03-11 00:11:39.873448 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:11:39.924619 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:11:39.978262 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:11:40.065624 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:11:40.066605 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:11:40.068173 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:11:40.068383 | orchestrator |
2025-03-11 00:11:40.070389 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-03-11 00:11:40.071182 | orchestrator | Tuesday 11 March 2025 00:11:40 +0000 (0:00:00.358)       0:05:41.589 *********
2025-03-11 00:11:40.165797 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:11:40.207058 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:11:40.248099 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:11:40.288486 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:11:40.332815 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:11:40.397916 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:11:40.398603 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:11:40.399138 | orchestrator |
2025-03-11 00:11:40.399625 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-03-11 00:11:40.400255 | orchestrator | Tuesday 11 March 2025 00:11:40 +0000 (0:00:00.332)       0:05:41.922 *********
2025-03-11 00:11:40.488606 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:11:40.525471 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:11:40.562392 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:11:40.595912 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:11:40.647258 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:11:40.836610 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:11:40.837508 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:11:40.838322 | orchestrator |
2025-03-11 00:11:40.839259 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-03-11 00:11:40.839891 | orchestrator | Tuesday 11 March 2025 00:11:40 +0000 (0:00:00.438)       0:05:42.360 *********
2025-03-11 00:11:41.349398 | orchestrator |
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:11:41.349631 | orchestrator | 2025-03-11 00:11:41.349664 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-03-11 00:11:41.350210 | orchestrator | Tuesday 11 March 2025 00:11:41 +0000 (0:00:00.511) 0:05:42.871 ********* 2025-03-11 00:11:42.239248 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:11:42.239423 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:11:42.239446 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:11:42.240466 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:11:42.240655 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:11:42.241312 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:11:42.241721 | orchestrator | ok: [testbed-manager] 2025-03-11 00:11:42.242227 | orchestrator | 2025-03-11 00:11:42.242653 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-03-11 00:11:42.246380 | orchestrator | Tuesday 11 March 2025 00:11:42 +0000 (0:00:00.889) 0:05:43.761 ********* 2025-03-11 00:11:45.337477 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:11:45.337977 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:11:45.338096 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:11:45.338164 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:11:45.338823 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:11:45.339515 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:11:45.340551 | orchestrator | ok: [testbed-manager] 2025-03-11 00:11:45.340620 | orchestrator | 2025-03-11 00:11:45.340642 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-03-11 00:11:45.341170 | orchestrator | Tuesday 11 March 2025 00:11:45 +0000 
(0:00:03.100) 0:05:46.861 ********* 2025-03-11 00:11:45.421214 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-03-11 00:11:45.422252 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-03-11 00:11:45.520235 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-03-11 00:11:45.520616 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-03-11 00:11:45.521805 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-03-11 00:11:45.522351 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-03-11 00:11:45.592351 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:11:45.592747 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-03-11 00:11:45.593806 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-03-11 00:11:45.594313 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-03-11 00:11:45.682611 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:11:45.683024 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-03-11 00:11:45.683544 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-03-11 00:11:45.684122 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-03-11 00:11:45.773919 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:11:45.774243 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-03-11 00:11:45.775218 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-03-11 00:11:45.775650 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-03-11 00:11:45.853793 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:11:45.854455 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-03-11 00:11:45.854894 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-03-11 00:11:45.855385 | orchestrator | skipping: [testbed-node-1] => 
(item=docker-engine)  2025-03-11 00:11:46.004945 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:11:46.005103 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:11:46.005135 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-03-11 00:11:46.006474 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-03-11 00:11:46.006504 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-03-11 00:11:46.006914 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:11:46.007188 | orchestrator | 2025-03-11 00:11:46.007379 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-03-11 00:11:46.007646 | orchestrator | Tuesday 11 March 2025 00:11:46 +0000 (0:00:00.664) 0:05:47.526 ********* 2025-03-11 00:11:52.501022 | orchestrator | ok: [testbed-manager] 2025-03-11 00:11:52.501224 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:11:52.503656 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:11:52.505441 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:11:52.510823 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:11:52.512152 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:11:52.512650 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:11:52.513531 | orchestrator | 2025-03-11 00:11:52.514526 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-03-11 00:11:52.515319 | orchestrator | Tuesday 11 March 2025 00:11:52 +0000 (0:00:06.496) 0:05:54.022 ********* 2025-03-11 00:11:53.612019 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:11:53.612253 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:11:53.612681 | orchestrator | ok: [testbed-manager] 2025-03-11 00:11:53.613163 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:11:53.613597 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:11:53.613972 | orchestrator | changed: [testbed-node-1] 
2025-03-11 00:11:53.614217 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:11:53.614746 | orchestrator |
2025-03-11 00:11:53.615360 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-03-11 00:11:53.615638 | orchestrator | Tuesday 11 March 2025 00:11:53 +0000 (0:00:01.109) 0:05:55.132 *********
2025-03-11 00:12:01.034731 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:01.035032 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:01.035652 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:01.035700 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:01.039146 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:01.039235 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:04.333955 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:04.334132 | orchestrator |
2025-03-11 00:12:04.334155 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-03-11 00:12:04.334171 | orchestrator | Tuesday 11 March 2025 00:12:01 +0000 (0:00:07.420) 0:06:02.553 *********
2025-03-11 00:12:04.334202 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:04.334276 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:04.335169 | orchestrator | changed: [testbed-manager]
2025-03-11 00:12:04.335363 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:04.336243 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:04.336643 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:04.337359 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:04.337855 | orchestrator |
2025-03-11 00:12:04.339451 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-03-11 00:12:04.340719 | orchestrator | Tuesday 11 March 2025 00:12:04 +0000 (0:00:03.302) 0:06:05.856 *********
2025-03-11 00:12:05.937222 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:05.937389 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:05.938547 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:05.938946 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:05.940056 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:05.941135 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:05.941620 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:05.942390 | orchestrator |
2025-03-11 00:12:05.942883 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-03-11 00:12:05.943511 | orchestrator | Tuesday 11 March 2025 00:12:05 +0000 (0:00:01.602) 0:06:07.458 *********
2025-03-11 00:12:07.334376 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:07.337038 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:07.337078 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:07.337403 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:07.338180 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:07.338333 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:07.338875 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:07.341741 | orchestrator |
2025-03-11 00:12:07.342727 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-03-11 00:12:07.344183 | orchestrator | Tuesday 11 March 2025 00:12:07 +0000 (0:00:01.393) 0:06:08.852 *********
2025-03-11 00:12:07.560985 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:07.635394 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:07.713965 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:07.789964 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:08.020449 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:08.020798 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:08.020833 | orchestrator | changed: [testbed-manager]
2025-03-11 00:12:08.021375 | orchestrator |
2025-03-11 00:12:08.021474 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-03-11 00:12:08.021958 | orchestrator | Tuesday 11 March 2025 00:12:08 +0000 (0:00:00.690) 0:06:09.543 *********
2025-03-11 00:12:17.453740 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:17.454107 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:17.455408 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:17.456461 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:17.459517 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:17.460532 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:17.460989 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:17.462733 | orchestrator |
2025-03-11 00:12:17.463196 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-03-11 00:12:17.463227 | orchestrator | Tuesday 11 March 2025 00:12:17 +0000 (0:00:09.432) 0:06:18.975 *********
2025-03-11 00:12:18.458610 | orchestrator | changed: [testbed-manager]
2025-03-11 00:12:18.459943 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:18.460916 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:18.462157 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:18.464129 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:18.465087 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:18.466060 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:18.467050 | orchestrator |
2025-03-11 00:12:18.468106 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-03-11 00:12:18.468630 | orchestrator | Tuesday 11 March 2025 00:12:18 +0000 (0:00:01.003) 0:06:19.978 *********
2025-03-11 00:12:30.627249 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:30.628616 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:30.628657 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:30.628672 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:30.628687 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:30.628709 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:30.629031 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:30.629086 | orchestrator |
2025-03-11 00:12:30.629613 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-03-11 00:12:30.631949 | orchestrator | Tuesday 11 March 2025 00:12:30 +0000 (0:00:12.164) 0:06:32.143 *********
2025-03-11 00:12:43.621061 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:43.621663 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:43.621713 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:43.624604 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:43.624643 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:43.625647 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:43.626481 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:43.627128 | orchestrator |
2025-03-11 00:12:43.628150 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-03-11 00:12:43.630309 | orchestrator | Tuesday 11 March 2025 00:12:43 +0000 (0:00:12.997) 0:06:45.140 *********
2025-03-11 00:12:43.999873 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-03-11 00:12:44.098327 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-03-11 00:12:44.879565 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-03-11 00:12:44.880804 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-03-11 00:12:44.880963 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-03-11 00:12:44.880987 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-03-11 00:12:44.881006 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-03-11 00:12:44.881767 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-03-11 00:12:44.883661 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-03-11 00:12:44.884075 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-03-11 00:12:44.884103 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-03-11 00:12:44.884621 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-03-11 00:12:44.885203 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-03-11 00:12:44.886147 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-03-11 00:12:44.887340 | orchestrator |
2025-03-11 00:12:44.887510 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-03-11 00:12:44.888834 | orchestrator | Tuesday 11 March 2025 00:12:44 +0000 (0:00:01.259) 0:06:46.400 *********
2025-03-11 00:12:45.039167 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:45.108395 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:45.182821 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:45.260182 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:45.328913 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:45.468383 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:45.470953 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:45.471879 | orchestrator |
2025-03-11 00:12:45.471910 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-03-11 00:12:45.473256 | orchestrator | Tuesday 11 March 2025 00:12:45 +0000 (0:00:00.589) 0:06:46.989 *********
2025-03-11 00:12:49.763093 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:49.764135 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:49.767611 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:49.768116 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:49.768612 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:49.769155 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:49.769832 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:49.770185 | orchestrator |
2025-03-11 00:12:49.771007 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-03-11 00:12:49.771713 | orchestrator | Tuesday 11 March 2025 00:12:49 +0000 (0:00:04.295) 0:06:51.285 *********
2025-03-11 00:12:49.907233 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:50.184236 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:50.255792 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:50.326446 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:50.410832 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:50.521476 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:50.521961 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:50.523302 | orchestrator |
2025-03-11 00:12:50.523637 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-03-11 00:12:50.524274 | orchestrator | Tuesday 11 March 2025 00:12:50 +0000 (0:00:00.758) 0:06:52.044 *********
2025-03-11 00:12:50.612338 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-03-11 00:12:50.612485 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-03-11 00:12:50.695475 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:50.695649 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-03-11 00:12:50.765847 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-03-11 00:12:50.765928 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:50.765981 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-03-11 00:12:50.766000 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-03-11 00:12:50.843106 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:50.844052 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-03-11 00:12:50.845276 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-03-11 00:12:50.930097 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:50.931358 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-03-11 00:12:50.932674 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-03-11 00:12:51.009187 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:51.009348 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-03-11 00:12:51.010973 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-03-11 00:12:51.170987 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:51.171172 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-03-11 00:12:51.172093 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-03-11 00:12:51.173255 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:51.174271 | orchestrator |
2025-03-11 00:12:51.175102 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-03-11 00:12:51.176028 | orchestrator | Tuesday 11 March 2025 00:12:51 +0000 (0:00:00.647) 0:06:52.692 *********
2025-03-11 00:12:51.328061 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:51.395373 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:51.484347 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:51.557471 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:51.623831 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:51.756878 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:51.756993 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:51.757688 | orchestrator |
2025-03-11 00:12:51.766131 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-03-11 00:12:51.901485 | orchestrator | Tuesday 11 March 2025 00:12:51 +0000 (0:00:00.588) 0:06:53.281 *********
2025-03-11 00:12:51.901541 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:51.980858 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:52.050143 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:52.114482 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:52.188319 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:52.304614 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:52.305714 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:52.306402 | orchestrator |
2025-03-11 00:12:52.310108 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-03-11 00:12:52.310897 | orchestrator | Tuesday 11 March 2025 00:12:52 +0000 (0:00:00.545) 0:06:53.826 *********
2025-03-11 00:12:52.451915 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:12:52.535579 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:12:52.609943 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:12:52.681896 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:12:52.748207 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:12:52.869223 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:12:52.873716 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:12:52.875997 | orchestrator |
2025-03-11 00:12:52.876020 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-03-11 00:12:52.876037 | orchestrator | Tuesday 11 March 2025 00:12:52 +0000 (0:00:00.563) 0:06:54.390 *********
2025-03-11 00:12:59.117307 | orchestrator | ok: [testbed-manager]
2025-03-11 00:12:59.117815 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:12:59.117852 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:12:59.117874 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:12:59.118668 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:12:59.120369 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:12:59.121089 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:12:59.121404 | orchestrator |
2025-03-11 00:12:59.122185 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-03-11 00:12:59.122844 | orchestrator | Tuesday 11 March 2025 00:12:59 +0000 (0:00:06.249) 0:07:00.639 *********
2025-03-11 00:13:00.060851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:13:00.062655 | orchestrator |
2025-03-11 00:13:00.066255 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-03-11 00:13:00.924554 | orchestrator | Tuesday 11 March 2025 00:13:00 +0000 (0:00:00.943) 0:07:01.582 *********
2025-03-11 00:13:00.924781 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:00.925018 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:13:00.926111 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:13:00.926818 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:13:00.927491 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:13:00.928310 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:13:00.929378 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:13:00.930104 | orchestrator |
2025-03-11 00:13:00.930839 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-03-11 00:13:00.931700 | orchestrator | Tuesday 11 March 2025 00:13:00 +0000 (0:00:00.865) 0:07:02.448 *********
2025-03-11 00:13:01.627018 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:02.061630 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:13:02.062098 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:13:02.062171 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:13:02.062239 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:13:02.062612 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:13:02.063268 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:13:02.063377 | orchestrator |
2025-03-11 00:13:02.064241 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-03-11 00:13:02.064311 | orchestrator | Tuesday 11 March 2025 00:13:02 +0000 (0:00:01.135) 0:07:03.583 *********
2025-03-11 00:13:03.426851 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:03.427032 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:13:03.427903 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:13:03.429259 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:13:03.429791 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:13:03.430632 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:13:03.432869 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:13:03.433523 | orchestrator |
2025-03-11 00:13:03.434312 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-03-11 00:13:03.434810 | orchestrator | Tuesday 11 March 2025 00:13:03 +0000 (0:00:01.362) 0:07:04.946 *********
2025-03-11 00:13:03.571670 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:13:04.955594 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:13:04.956820 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:13:04.958178 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:13:04.959300 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:04.960400 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:13:04.962316 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:13:06.350122 | orchestrator |
2025-03-11 00:13:06.350270 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-03-11 00:13:06.350287 | orchestrator | Tuesday 11 March 2025 00:13:04 +0000 (0:00:01.532) 0:07:06.479 *********
2025-03-11 00:13:06.350314 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:06.350655 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:13:06.351412 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:13:06.351512 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:13:06.354062 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:13:06.354710 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:13:06.354883 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:13:06.355556 | orchestrator |
2025-03-11 00:13:06.355725 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-03-11 00:13:06.356160 | orchestrator | Tuesday 11 March 2025 00:13:06 +0000 (0:00:01.390) 0:07:07.869 *********
2025-03-11 00:13:07.904327 | orchestrator | changed: [testbed-manager]
2025-03-11 00:13:07.904930 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:13:07.904962 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:13:07.905854 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:13:07.906494 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:13:07.907684 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:13:07.909887 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:13:07.911382 | orchestrator |
2025-03-11 00:13:07.912206 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-03-11 00:13:07.913051 | orchestrator | Tuesday 11 March 2025 00:13:07 +0000 (0:00:01.554) 0:07:09.423 *********
2025-03-11 00:13:09.084033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:13:09.084203 | orchestrator |
2025-03-11 00:13:09.084494 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-03-11 00:13:09.084871 | orchestrator | Tuesday 11 March 2025 00:13:09 +0000 (0:00:01.182) 0:07:10.606 *********
2025-03-11 00:13:10.584522 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:13:10.585058 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:13:10.585869 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:10.587685 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:13:10.589768 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:10.590167 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:13:10.590196 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:13:10.590703 | orchestrator |
2025-03-11 00:13:10.591538 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-03-11 00:13:10.592080 | orchestrator | Tuesday 11 March 2025 00:13:10 +0000 (0:00:01.499) 0:07:12.106 *********
2025-03-11 00:13:11.796575 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:11.796832 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:13:11.796865 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:13:11.797926 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:13:11.798817 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:11.801326 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:13:11.801755 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:13:11.802347 | orchestrator |
2025-03-11 00:13:11.802914 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-03-11 00:13:11.803620 | orchestrator | Tuesday 11 March 2025 00:13:11 +0000 (0:00:01.208) 0:07:13.314 *********
2025-03-11 00:13:13.073452 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:13.076317 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:13:13.077122 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:13:13.078253 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:13:13.078794 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:13.079443 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:13:13.082518 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:13:13.082667 | orchestrator |
2025-03-11 00:13:13.083445 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-03-11 00:13:13.083539 | orchestrator | Tuesday 11 March 2025 00:13:13 +0000 (0:00:01.275) 0:07:14.590 *********
2025-03-11 00:13:14.547827 | orchestrator | ok: [testbed-manager]
2025-03-11 00:13:14.548003 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:13:14.548360 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:13:14.548879 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:13:14.549014 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:13:14.549620 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:14.549856 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:13:14.550355 | orchestrator |
2025-03-11 00:13:14.554989 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-03-11 00:13:14.555029 | orchestrator | Tuesday 11 March 2025 00:13:14 +0000 (0:00:01.480) 0:07:16.071 *********
2025-03-11 00:13:15.974610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:13:15.975015 | orchestrator |
2025-03-11 00:13:15.976090 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.976857 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:01.055) 0:07:17.127 *********
2025-03-11 00:13:15.977927 | orchestrator |
2025-03-11 00:13:15.982069 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.983980 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.042) 0:07:17.169 *********
2025-03-11 00:13:15.984004 | orchestrator |
2025-03-11 00:13:15.984019 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.984033 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.054) 0:07:17.223 *********
2025-03-11 00:13:15.984051 | orchestrator |
2025-03-11 00:13:15.984693 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.985705 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.065) 0:07:17.289 *********
2025-03-11 00:13:15.986403 | orchestrator |
2025-03-11 00:13:15.987070 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.987878 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.040) 0:07:17.330 *********
2025-03-11 00:13:15.988664 | orchestrator |
2025-03-11 00:13:15.988827 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.989398 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.052) 0:07:17.383 *********
2025-03-11 00:13:15.989948 | orchestrator |
2025-03-11 00:13:15.990433 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-11 00:13:15.990940 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.042) 0:07:17.425 *********
2025-03-11 00:13:15.991551 | orchestrator |
2025-03-11 00:13:15.992016 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-11 00:13:15.992350 | orchestrator | Tuesday 11 March 2025 00:13:15 +0000 (0:00:00.069) 0:07:17.494 *********
2025-03-11 00:13:17.377642 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:13:17.377942 | orchestrator | ok:
[testbed-node-1] 2025-03-11 00:13:17.377970 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:17.377990 | orchestrator | 2025-03-11 00:13:17.378552 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-03-11 00:13:17.378870 | orchestrator | Tuesday 11 March 2025 00:13:17 +0000 (0:00:01.403) 0:07:18.898 ********* 2025-03-11 00:13:19.078762 | orchestrator | changed: [testbed-manager] 2025-03-11 00:13:19.079594 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:19.080182 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:19.083044 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:19.083668 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:19.083750 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:19.083767 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:19.083781 | orchestrator | 2025-03-11 00:13:19.083796 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-03-11 00:13:19.083817 | orchestrator | Tuesday 11 March 2025 00:13:19 +0000 (0:00:01.700) 0:07:20.599 ********* 2025-03-11 00:13:20.276454 | orchestrator | changed: [testbed-manager] 2025-03-11 00:13:20.277115 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:20.278201 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:20.278565 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:20.279159 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:20.280168 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:20.283627 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:20.432346 | orchestrator | 2025-03-11 00:13:20.432424 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-03-11 00:13:20.432440 | orchestrator | Tuesday 11 March 2025 00:13:20 +0000 (0:00:01.196) 0:07:21.795 ********* 2025-03-11 00:13:20.432465 | orchestrator | skipping: [testbed-manager] 
2025-03-11 00:13:22.457322 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:22.457564 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:22.458206 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:22.458257 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:22.458275 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:22.458299 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:22.459184 | orchestrator | 2025-03-11 00:13:22.566183 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-03-11 00:13:22.566231 | orchestrator | Tuesday 11 March 2025 00:13:22 +0000 (0:00:02.179) 0:07:23.975 ********* 2025-03-11 00:13:22.566257 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:22.567026 | orchestrator | 2025-03-11 00:13:22.567422 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-03-11 00:13:22.567455 | orchestrator | Tuesday 11 March 2025 00:13:22 +0000 (0:00:00.113) 0:07:24.089 ********* 2025-03-11 00:13:23.583897 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:23.584051 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:23.584456 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:23.586389 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:23.586534 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:23.587397 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:23.588355 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:23.588957 | orchestrator | 2025-03-11 00:13:23.590001 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-03-11 00:13:23.590406 | orchestrator | Tuesday 11 March 2025 00:13:23 +0000 (0:00:01.015) 0:07:25.105 ********* 2025-03-11 00:13:23.746563 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:23.824967 | orchestrator | skipping: [testbed-node-3] 2025-03-11 
00:13:23.919928 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:24.224461 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:24.298783 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:24.417359 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:24.417765 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:24.419105 | orchestrator | 2025-03-11 00:13:24.420878 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-03-11 00:13:25.445230 | orchestrator | Tuesday 11 March 2025 00:13:24 +0000 (0:00:00.834) 0:07:25.939 ********* 2025-03-11 00:13:25.445374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:13:25.445478 | orchestrator | 2025-03-11 00:13:25.446541 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-03-11 00:13:25.446948 | orchestrator | Tuesday 11 March 2025 00:13:25 +0000 (0:00:01.028) 0:07:26.968 ********* 2025-03-11 00:13:25.879099 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:26.341990 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:26.343358 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:26.343400 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:26.344739 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:26.345302 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:26.348247 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:26.348829 | orchestrator | 2025-03-11 00:13:26.349394 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-03-11 00:13:26.350305 | orchestrator | Tuesday 11 March 2025 00:13:26 +0000 (0:00:00.896) 0:07:27.864 ********* 2025-03-11 00:13:29.256648 | orchestrator | ok: [testbed-manager] => 
(item=docker_containers) 2025-03-11 00:13:29.259653 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-03-11 00:13:29.261166 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-03-11 00:13:29.261179 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-03-11 00:13:29.261184 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-03-11 00:13:29.261192 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-03-11 00:13:29.261436 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-03-11 00:13:29.262625 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-03-11 00:13:29.263175 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-03-11 00:13:29.264685 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-03-11 00:13:29.265127 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-03-11 00:13:29.266333 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-03-11 00:13:29.267124 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-03-11 00:13:29.268235 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-03-11 00:13:29.268919 | orchestrator | 2025-03-11 00:13:29.269494 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-03-11 00:13:29.270096 | orchestrator | Tuesday 11 March 2025 00:13:29 +0000 (0:00:02.911) 0:07:30.776 ********* 2025-03-11 00:13:29.426841 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:29.498391 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:29.571053 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:29.637088 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:29.705847 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:29.809702 | orchestrator | skipping: [testbed-node-1] 2025-03-11 
00:13:29.810407 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:29.814160 | orchestrator | 2025-03-11 00:13:29.815247 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-03-11 00:13:29.816291 | orchestrator | Tuesday 11 March 2025 00:13:29 +0000 (0:00:00.555) 0:07:31.332 ********* 2025-03-11 00:13:30.715212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:13:30.715354 | orchestrator | 2025-03-11 00:13:30.716107 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-03-11 00:13:30.716860 | orchestrator | Tuesday 11 March 2025 00:13:30 +0000 (0:00:00.903) 0:07:32.235 ********* 2025-03-11 00:13:31.181155 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:31.860084 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:31.860377 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:31.860422 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:31.861313 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:31.862060 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:31.862356 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:31.865239 | orchestrator | 2025-03-11 00:13:32.732502 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-03-11 00:13:32.732622 | orchestrator | Tuesday 11 March 2025 00:13:31 +0000 (0:00:01.147) 0:07:33.382 ********* 2025-03-11 00:13:32.732766 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:32.732853 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:32.733296 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:32.733656 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:32.734993 | orchestrator | ok: [testbed-node-0] 
2025-03-11 00:13:32.736037 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:32.736702 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:32.736755 | orchestrator | 2025-03-11 00:13:32.737230 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-03-11 00:13:32.737845 | orchestrator | Tuesday 11 March 2025 00:13:32 +0000 (0:00:00.867) 0:07:34.250 ********* 2025-03-11 00:13:32.886594 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:32.969854 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:33.049094 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:33.126890 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:33.207459 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:33.318510 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:33.319037 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:33.320005 | orchestrator | 2025-03-11 00:13:33.320686 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-03-11 00:13:33.321900 | orchestrator | Tuesday 11 March 2025 00:13:33 +0000 (0:00:00.588) 0:07:34.839 ********* 2025-03-11 00:13:34.805041 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:34.805243 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:34.806137 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:34.806263 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:34.807327 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:34.808200 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:34.810373 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:34.810908 | orchestrator | 2025-03-11 00:13:34.811844 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-03-11 00:13:34.811957 | orchestrator | Tuesday 11 March 2025 00:13:34 +0000 (0:00:01.489) 0:07:36.328 ********* 2025-03-11 00:13:34.948168 | orchestrator | skipping: 
[testbed-manager] 2025-03-11 00:13:35.030889 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:35.104120 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:35.172537 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:35.247757 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:35.345621 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:35.349888 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:35.351953 | orchestrator | 2025-03-11 00:13:35.357342 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-03-11 00:13:35.357831 | orchestrator | Tuesday 11 March 2025 00:13:35 +0000 (0:00:00.535) 0:07:36.864 ********* 2025-03-11 00:13:37.658447 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:37.658667 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:37.658768 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:37.658831 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:37.659910 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:37.664700 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:37.666100 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:37.666133 | orchestrator | 2025-03-11 00:13:37.668817 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-03-11 00:13:39.078389 | orchestrator | Tuesday 11 March 2025 00:13:37 +0000 (0:00:02.315) 0:07:39.179 ********* 2025-03-11 00:13:39.078514 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:39.079132 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:39.079679 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:39.080499 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:39.080681 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:39.081756 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:39.087122 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:40.991438 | orchestrator 
| 2025-03-11 00:13:40.991598 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-03-11 00:13:40.991650 | orchestrator | Tuesday 11 March 2025 00:13:39 +0000 (0:00:01.422) 0:07:40.601 ********* 2025-03-11 00:13:40.991686 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:40.991874 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:40.992566 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:40.993430 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:40.994453 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:40.996208 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:40.997464 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:40.998143 | orchestrator | 2025-03-11 00:13:40.998935 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-03-11 00:13:41.000023 | orchestrator | Tuesday 11 March 2025 00:13:40 +0000 (0:00:01.908) 0:07:42.510 ********* 2025-03-11 00:13:42.768011 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:42.771463 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:13:42.771818 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:13:42.772018 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:13:42.772043 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:13:42.772062 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:13:42.772860 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:13:42.775156 | orchestrator | 2025-03-11 00:13:42.780469 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-11 00:13:43.490751 | orchestrator | Tuesday 11 March 2025 00:13:42 +0000 (0:00:01.776) 0:07:44.286 ********* 2025-03-11 00:13:43.490916 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:43.567453 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:44.043172 | orchestrator | ok: [testbed-node-4] 2025-03-11 
00:13:44.051603 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:44.051919 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:44.051949 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:44.051989 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:44.052004 | orchestrator | 2025-03-11 00:13:44.052020 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-11 00:13:44.052041 | orchestrator | Tuesday 11 March 2025 00:13:44 +0000 (0:00:01.277) 0:07:45.563 ********* 2025-03-11 00:13:44.179437 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:44.273121 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:44.344180 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:44.410003 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:44.488936 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:44.955171 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:44.956213 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:44.956610 | orchestrator | 2025-03-11 00:13:44.956643 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-03-11 00:13:44.957198 | orchestrator | Tuesday 11 March 2025 00:13:44 +0000 (0:00:00.913) 0:07:46.477 ********* 2025-03-11 00:13:45.116631 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:45.187873 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:45.271137 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:45.340498 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:45.411276 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:45.556174 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:45.557041 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:45.559952 | orchestrator | 2025-03-11 00:13:45.561824 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-03-11 
00:13:45.562187 | orchestrator | Tuesday 11 March 2025 00:13:45 +0000 (0:00:00.599) 0:07:47.077 ********* 2025-03-11 00:13:45.704301 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:45.783947 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:45.861268 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:45.929029 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:46.005515 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:46.127010 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:46.128442 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:46.128481 | orchestrator | 2025-03-11 00:13:46.129539 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-03-11 00:13:46.130914 | orchestrator | Tuesday 11 March 2025 00:13:46 +0000 (0:00:00.569) 0:07:47.647 ********* 2025-03-11 00:13:46.506216 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:46.578524 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:46.647134 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:46.729693 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:46.803167 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:46.914414 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:46.914850 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:46.916166 | orchestrator | 2025-03-11 00:13:46.919349 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-03-11 00:13:46.919955 | orchestrator | Tuesday 11 March 2025 00:13:46 +0000 (0:00:00.788) 0:07:48.436 ********* 2025-03-11 00:13:47.085118 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:47.152084 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:47.229625 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:47.309619 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:47.388176 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:47.542309 | orchestrator | ok: [testbed-node-1] 2025-03-11 
00:13:47.542473 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:47.543221 | orchestrator | 2025-03-11 00:13:47.543943 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-03-11 00:13:47.544468 | orchestrator | Tuesday 11 March 2025 00:13:47 +0000 (0:00:00.631) 0:07:49.067 ********* 2025-03-11 00:13:53.437809 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:53.438547 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:53.439454 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:53.440989 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:53.441540 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:53.442497 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:53.443031 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:53.443567 | orchestrator | 2025-03-11 00:13:53.444872 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-03-11 00:13:53.446134 | orchestrator | Tuesday 11 March 2025 00:13:53 +0000 (0:00:05.892) 0:07:54.959 ********* 2025-03-11 00:13:53.600057 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:13:53.666844 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:13:53.736479 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:13:53.827154 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:13:53.911088 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:13:54.052338 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:13:54.053016 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:13:54.053133 | orchestrator | 2025-03-11 00:13:54.053657 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-03-11 00:13:54.054576 | orchestrator | Tuesday 11 March 2025 00:13:54 +0000 (0:00:00.615) 0:07:55.575 ********* 2025-03-11 00:13:55.235116 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 00:13:55.235301 | orchestrator | 2025-03-11 00:13:55.235611 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-03-11 00:13:55.236193 | orchestrator | Tuesday 11 March 2025 00:13:55 +0000 (0:00:01.181) 0:07:56.756 ********* 2025-03-11 00:13:57.161184 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:57.163310 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:57.165321 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:57.165366 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:57.165377 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:57.165390 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:57.165428 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:57.166396 | orchestrator | 2025-03-11 00:13:57.167088 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-11 00:13:57.168076 | orchestrator | Tuesday 11 March 2025 00:13:57 +0000 (0:00:01.924) 0:07:58.681 ********* 2025-03-11 00:13:58.363196 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:58.363363 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:58.365235 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:58.365627 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:58.365658 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:58.366413 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:58.366932 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:58.367756 | orchestrator | 2025-03-11 00:13:58.368230 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-11 00:13:58.368852 | orchestrator | Tuesday 11 March 2025 00:13:58 +0000 (0:00:01.204) 0:07:59.885 ********* 
2025-03-11 00:13:58.828741 | orchestrator | ok: [testbed-manager] 2025-03-11 00:13:59.271892 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:13:59.272104 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:13:59.272135 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:13:59.273910 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:13:59.275043 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:13:59.275401 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:13:59.276036 | orchestrator | 2025-03-11 00:13:59.276674 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-11 00:13:59.277314 | orchestrator | Tuesday 11 March 2025 00:13:59 +0000 (0:00:00.905) 0:08:00.791 ********* 2025-03-11 00:14:01.295503 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.296111 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.297339 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.300151 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.300771 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.301560 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-11 00:14:01.301997 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-03-11 00:14:01.302666 | orchestrator |
2025-03-11 00:14:01.302995 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-03-11 00:14:01.303659 | orchestrator | Tuesday 11 March 2025 00:14:01 +0000 (0:00:02.025) 0:08:02.817 *********
2025-03-11 00:14:02.175676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:14:02.175925 | orchestrator |
2025-03-11 00:14:02.179436 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-03-11 00:14:02.179874 | orchestrator | Tuesday 11 March 2025 00:14:02 +0000 (0:00:00.879) 0:08:03.696 *********
2025-03-11 00:14:11.985161 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:11.986504 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:11.989068 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:11.989099 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:11.989835 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:11.990222 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:11.991205 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:11.992393 | orchestrator |
2025-03-11 00:14:11.992798 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-03-11 00:14:11.993323 | orchestrator | Tuesday 11 March 2025 00:14:11 +0000 (0:00:09.806) 0:08:13.502 *********
2025-03-11 00:14:14.905874 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:14.906159 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:14.906190 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:14.906212 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:14.907342 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:14.912224 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:14.912356 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:14.912722 | orchestrator |
2025-03-11 00:14:14.913370 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-03-11 00:14:14.913757 | orchestrator | Tuesday 11 March 2025 00:14:14 +0000 (0:00:02.923) 0:08:16.426 *********
2025-03-11 00:14:16.324736 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:16.324864 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:16.325986 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:16.326825 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:16.327790 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:16.328618 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:16.329679 | orchestrator |
2025-03-11 00:14:16.330796 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-03-11 00:14:16.331715 | orchestrator | Tuesday 11 March 2025 00:14:16 +0000 (0:00:01.415) 0:08:17.842 *********
2025-03-11 00:14:17.905403 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:17.906274 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:17.906323 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:17.906876 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:17.908915 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:17.910125 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:17.911119 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:17.911913 | orchestrator |
2025-03-11 00:14:17.912883 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-03-11 00:14:17.913822 | orchestrator |
2025-03-11 00:14:17.914135 | orchestrator | TASK [Include hardening role] **************************************************
2025-03-11 00:14:17.914664 | orchestrator | Tuesday 11 March 2025 00:14:17 +0000 (0:00:01.585) 0:08:19.427 *********
2025-03-11 00:14:18.046258 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:14:18.107479 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:14:18.194559 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:14:18.262433 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:14:18.334913 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:14:18.502519 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:14:18.502663 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:14:18.503901 | orchestrator |
2025-03-11 00:14:18.504269 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-03-11 00:14:18.504884 | orchestrator |
2025-03-11 00:14:18.505159 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-03-11 00:14:18.505528 | orchestrator | Tuesday 11 March 2025 00:14:18 +0000 (0:00:00.598) 0:08:20.026 *********
2025-03-11 00:14:19.887258 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:19.888320 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:19.888352 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:19.888373 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:19.888584 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:19.889488 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:19.890124 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:19.890184 | orchestrator |
2025-03-11 00:14:19.890242 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-03-11 00:14:19.890615 | orchestrator | Tuesday 11 March 2025 00:14:19 +0000 (0:00:01.379) 0:08:21.406 *********
2025-03-11 00:14:21.476658 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:21.478660 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:21.478735 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:21.478792 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:21.479209 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:21.480418 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:21.481099 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:21.482853 | orchestrator |
2025-03-11 00:14:21.484811 | orchestrator | TASK [Include auditd role] *****************************************************
2025-03-11 00:14:21.487424 | orchestrator | Tuesday 11 March 2025 00:14:21 +0000 (0:00:01.590) 0:08:22.996 *********
2025-03-11 00:14:21.616277 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:14:21.946938 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:14:22.014220 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:14:22.080248 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:14:22.152496 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:14:22.597230 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:14:22.602153 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:14:22.602232 | orchestrator |
2025-03-11 00:14:22.602981 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-03-11 00:14:22.603831 | orchestrator | Tuesday 11 March 2025 00:14:22 +0000 (0:00:01.121) 0:08:24.118 *********
2025-03-11 00:14:23.893343 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:23.894175 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:23.894216 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:23.896291 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:23.897122 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:23.897156 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:23.897846 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:23.898363 | orchestrator |
2025-03-11 00:14:23.899510 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-03-11 00:14:23.899820 | orchestrator |
2025-03-11 00:14:23.900898 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-03-11 00:14:23.901403 | orchestrator | Tuesday 11 March 2025 00:14:23 +0000 (0:00:01.299) 0:08:25.417 *********
2025-03-11 00:14:24.977134 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:14:24.977903 | orchestrator |
2025-03-11 00:14:24.977951 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-03-11 00:14:24.977980 | orchestrator | Tuesday 11 March 2025 00:14:24 +0000 (0:00:01.078) 0:08:26.495 *********
2025-03-11 00:14:25.873033 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:25.873444 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:25.873493 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:25.874512 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:25.875435 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:25.875777 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:25.876431 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:25.878321 | orchestrator |
2025-03-11 00:14:27.192854 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-03-11 00:14:27.192991 | orchestrator | Tuesday 11 March 2025 00:14:25 +0000 (0:00:00.899) 0:08:27.394 *********
2025-03-11 00:14:27.193027 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:27.193621 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:27.193662 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:27.194365 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:27.195454 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:27.196496 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:27.196923 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:27.197809 | orchestrator |
2025-03-11 00:14:27.198391 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-03-11 00:14:27.199100 | orchestrator | Tuesday 11 March 2025 00:14:27 +0000 (0:00:01.318) 0:08:28.713 *********
2025-03-11 00:14:28.364202 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 00:14:28.364542 | orchestrator |
2025-03-11 00:14:28.365309 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-03-11 00:14:28.366083 | orchestrator | Tuesday 11 March 2025 00:14:28 +0000 (0:00:01.172) 0:08:29.885 *********
2025-03-11 00:14:28.950234 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:29.427651 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:29.428324 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:29.429066 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:29.430287 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:29.432171 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:29.432228 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:29.433048 | orchestrator |
2025-03-11 00:14:29.433083 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-03-11 00:14:29.433796 | orchestrator | Tuesday 11 March 2025 00:14:29 +0000 (0:00:01.065) 0:08:30.951 *********
2025-03-11 00:14:29.864062 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:30.709323 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:30.709969 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:30.711595 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:30.713735 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:30.713840 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:30.715152 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:30.715753 | orchestrator |
2025-03-11 00:14:30.716232 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:14:30.716757 | orchestrator | 2025-03-11 00:14:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:14:30.717805 | orchestrator | 2025-03-11 00:14:30 | INFO  | Please wait and do not abort execution.
2025-03-11 00:14:30.717842 | orchestrator | testbed-manager : ok=160  changed=39  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-03-11 00:14:30.718663 | orchestrator | testbed-node-0 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-11 00:14:30.719629 | orchestrator | testbed-node-1 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-11 00:14:30.720708 | orchestrator | testbed-node-2 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-11 00:14:30.721214 | orchestrator | testbed-node-3 : ok=167  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-03-11 00:14:30.722178 | orchestrator | testbed-node-4 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-11 00:14:30.723121 | orchestrator | testbed-node-5 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-11 00:14:30.723777 | orchestrator |
2025-03-11 00:14:30.724564 | orchestrator | Tuesday 11 March 2025 00:14:30 +0000 (0:00:01.279) 0:08:32.231 *********
2025-03-11 00:14:30.725319 | orchestrator | ===============================================================================
2025-03-11 00:14:30.726207 | orchestrator | osism.commons.packages : Install required packages --------------------- 94.36s
2025-03-11 00:14:30.726734 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.60s
2025-03-11 00:14:30.726922 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.08s
2025-03-11 00:14:30.727408 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 15.52s
2025-03-11 00:14:30.727805 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.86s
2025-03-11 00:14:30.728135 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.60s
2025-03-11 00:14:30.729015 | orchestrator | osism.services.docker : Install docker package ------------------------- 13.00s
2025-03-11 00:14:30.729237 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.16s
2025-03-11 00:14:30.729601 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.81s
2025-03-11 00:14:30.730077 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.43s
2025-03-11 00:14:30.730405 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.10s
2025-03-11 00:14:30.730753 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.14s
2025-03-11 00:14:30.731048 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.70s
2025-03-11 00:14:30.731433 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.42s
2025-03-11 00:14:30.731949 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.23s
2025-03-11 00:14:30.732304 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.50s
2025-03-11 00:14:30.732662 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.29s
2025-03-11 00:14:30.733015 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.25s
2025-03-11 00:14:30.733396 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.00s
2025-03-11 00:14:30.733913 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.90s
2025-03-11 00:14:31.632122 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-03-11 00:14:33.906937 | orchestrator | + osism apply network
2025-03-11 00:14:33.907095 | orchestrator | 2025-03-11 00:14:33 | INFO  | Task 4fba3167-5b0a-406b-b7c8-692f2e365a96 (network) was prepared for execution.
2025-03-11 00:14:37.839299 | orchestrator | 2025-03-11 00:14:33 | INFO  | It takes a moment until task 4fba3167-5b0a-406b-b7c8-692f2e365a96 (network) has been started and output is visible here.
2025-03-11 00:14:37.839440 | orchestrator |
2025-03-11 00:14:37.842291 | orchestrator | PLAY [Apply role network] ******************************************************
2025-03-11 00:14:37.843180 | orchestrator |
2025-03-11 00:14:37.844018 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-03-11 00:14:37.846112 | orchestrator | Tuesday 11 March 2025 00:14:37 +0000 (0:00:00.272) 0:00:00.272 *********
2025-03-11 00:14:37.992247 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:38.085005 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:38.170068 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:38.250082 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:38.338316 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:38.618331 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:38.622479 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:38.622535 | orchestrator |
2025-03-11 00:14:39.887450 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-03-11 00:14:39.887567 | orchestrator | Tuesday 11 March 2025 00:14:38 +0000 (0:00:00.776) 0:00:01.049 *********
2025-03-11 00:14:39.887600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 00:14:39.888978 | orchestrator |
2025-03-11 00:14:39.891002 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-03-11 00:14:39.892635 | orchestrator | Tuesday 11 March 2025 00:14:39 +0000 (0:00:01.272) 0:00:02.321 *********
2025-03-11 00:14:41.581427 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:41.583596 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:41.585056 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:41.585828 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:41.587454 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:41.588823 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:41.590438 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:41.590854 | orchestrator |
2025-03-11 00:14:41.592104 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-03-11 00:14:41.593767 | orchestrator | Tuesday 11 March 2025 00:14:41 +0000 (0:00:01.691) 0:00:04.013 *********
2025-03-11 00:14:43.356309 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:43.356583 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:43.363991 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:43.365458 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:43.370466 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:43.370980 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:43.371505 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:43.371934 | orchestrator |
2025-03-11 00:14:43.372535 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-03-11 00:14:43.372837 | orchestrator | Tuesday 11 March 2025 00:14:43 +0000 (0:00:01.773) 0:00:05.786 *********
2025-03-11 00:14:43.929324 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-03-11 00:14:43.931199 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-03-11 00:14:44.601355 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-03-11 00:14:44.601479 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-03-11 00:14:44.601636 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-03-11 00:14:44.601809 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-03-11 00:14:44.602087 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-03-11 00:14:44.602123 | orchestrator |
2025-03-11 00:14:44.602430 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-03-11 00:14:44.602913 | orchestrator | Tuesday 11 March 2025 00:14:44 +0000 (0:00:01.248) 0:00:07.035 *********
2025-03-11 00:14:46.857076 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 00:14:46.859991 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-11 00:14:46.862709 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-03-11 00:14:46.862988 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-03-11 00:14:46.863904 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-03-11 00:14:46.863978 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-03-11 00:14:46.864588 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-03-11 00:14:46.865031 | orchestrator |
2025-03-11 00:14:46.865978 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-03-11 00:14:46.866460 | orchestrator | Tuesday 11 March 2025 00:14:46 +0000 (0:00:02.257) 0:00:09.293 *********
2025-03-11 00:14:48.582633 | orchestrator | changed: [testbed-manager]
2025-03-11 00:14:48.582960 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:48.582991 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:48.583004 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:48.583025 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:48.583477 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:48.584032 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:48.587378 | orchestrator |
2025-03-11 00:14:49.159340 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-03-11 00:14:49.159455 | orchestrator | Tuesday 11 March 2025 00:14:48 +0000 (0:00:01.722) 0:00:11.015 *********
2025-03-11 00:14:49.159490 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 00:14:49.250927 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-11 00:14:49.710865 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-03-11 00:14:49.711049 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-03-11 00:14:49.712577 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-03-11 00:14:49.713576 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-03-11 00:14:49.714429 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-03-11 00:14:49.715647 | orchestrator |
2025-03-11 00:14:49.715953 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-03-11 00:14:49.716626 | orchestrator | Tuesday 11 March 2025 00:14:49 +0000 (0:00:01.134) 0:00:12.149 *********
2025-03-11 00:14:50.215368 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:50.309493 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:50.936732 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:50.937106 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:50.937199 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:50.937998 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:50.938341 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:50.938608 | orchestrator |
2025-03-11 00:14:50.939241 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-03-11 00:14:50.939812 | orchestrator | Tuesday 11 March 2025 00:14:50 +0000 (0:00:01.219) 0:00:13.369 *********
2025-03-11 00:14:51.117823 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:14:51.211995 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:14:51.295332 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:14:51.381777 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:14:51.483493 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:14:51.851206 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:14:51.851928 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:14:51.852074 | orchestrator |
2025-03-11 00:14:51.852161 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-03-11 00:14:51.852906 | orchestrator | Tuesday 11 March 2025 00:14:51 +0000 (0:00:00.916) 0:00:14.285 *********
2025-03-11 00:14:53.959988 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:53.960797 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:14:53.964341 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:14:53.964503 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:14:53.964533 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:14:53.965180 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:14:53.965863 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:14:53.966363 | orchestrator |
2025-03-11 00:14:53.966853 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-03-11 00:14:53.967401 | orchestrator | Tuesday 11 March 2025 00:14:53 +0000 (0:00:02.112) 0:00:16.398 *********
2025-03-11 00:14:55.960878 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-03-11 00:14:55.961512 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:55.962883 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:55.963090 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:55.967423 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:57.548264 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:57.548383 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:57.548403 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-03-11 00:14:57.548420 | orchestrator |
2025-03-11 00:14:57.548510 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-03-11 00:14:57.548527 | orchestrator | Tuesday 11 March 2025 00:14:55 +0000 (0:00:01.996) 0:00:18.395 *********
2025-03-11 00:14:57.548559 | orchestrator | ok: [testbed-manager]
2025-03-11 00:14:57.548631 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:14:57.548653 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:14:57.549039 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:14:57.549900 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:14:57.549969 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:14:57.550453 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:14:57.553143 | orchestrator |
2025-03-11 00:14:59.095457 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-03-11 00:14:59.095619 | orchestrator | Tuesday 11 March 2025 00:14:57 +0000 (0:00:01.590) 0:00:19.985 *********
2025-03-11 00:14:59.095711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 00:14:59.096270 | orchestrator |
2025-03-11 00:14:59.096492 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-03-11 00:14:59.097539 | orchestrator | Tuesday 11 March 2025 00:14:59 +0000 (0:00:01.544) 0:00:21.529 *********
2025-03-11 00:15:00.144367 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:00.145245 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:15:00.148097 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:15:00.149099 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:15:00.149133 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:15:00.149785 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:15:00.150644 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:15:00.150825 | orchestrator |
2025-03-11 00:15:00.151558 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-03-11 00:15:00.151964 | orchestrator | Tuesday 11 March 2025 00:15:00 +0000 (0:00:01.051) 0:00:22.581 *********
2025-03-11 00:15:00.318999 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:00.407432 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:15:00.714605 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:15:00.810297 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:15:00.905554 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:15:01.058841 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:15:01.059239 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:15:01.059306 | orchestrator |
2025-03-11 00:15:01.060091 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-03-11 00:15:01.060162 | orchestrator | Tuesday 11 March 2025 00:15:01 +0000 (0:00:00.913) 0:00:23.494 *********
2025-03-11 00:15:01.536096 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:01.536335 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:01.537128 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:01.537164 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:01.631402 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:02.143401 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:02.143562 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:02.143633 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:02.143654 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:02.143963 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:02.144873 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:02.145224 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:02.146261 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-11 00:15:02.146402 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-03-11 00:15:02.147833 | orchestrator |
2025-03-11 00:15:02.148296 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-03-11 00:15:02.148327 | orchestrator | Tuesday 11 March 2025 00:15:02 +0000 (0:00:01.086) 0:00:24.581 *********
2025-03-11 00:15:02.524773 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:15:02.632975 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:15:02.725142 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:15:02.822301 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:15:02.912650 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:15:04.283893 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:15:04.284045 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:15:04.284715 | orchestrator |
2025-03-11 00:15:04.285321 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-03-11 00:15:04.285996 | orchestrator | Tuesday 11 March 2025 00:15:04 +0000 (0:00:02.138) 0:00:26.720 *********
2025-03-11 00:15:04.511763 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:15:04.609925 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:15:04.946483 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:15:05.056646 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:15:05.173161 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:15:05.216052 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:15:05.216593 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:15:05.216626 | orchestrator |
2025-03-11 00:15:05.217251 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:15:05.217713 | orchestrator | 2025-03-11 00:15:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:15:05.217791 | orchestrator | 2025-03-11 00:15:05 | INFO  | Please wait and do not abort execution.
2025-03-11 00:15:05.218854 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.219433 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.220061 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.220579 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.221151 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.221640 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.222445 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 00:15:05.222796 | orchestrator |
2025-03-11 00:15:05.222946 | orchestrator | Tuesday 11 March 2025 00:15:05 +0000 (0:00:00.935) 0:00:27.655 *********
2025-03-11 00:15:05.223253 | orchestrator | ===============================================================================
2025-03-11 00:15:05.223681 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.26s
2025-03-11 00:15:05.224009 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 2.14s
2025-03-11 00:15:05.224421 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.11s
2025-03-11 00:15:05.224775 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.00s
2025-03-11 00:15:05.225043 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s
2025-03-11 00:15:05.225242 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s
2025-03-11 00:15:05.225583 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.69s
2025-03-11 00:15:05.226180 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.59s
2025-03-11 00:15:05.226854 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.54s
2025-03-11 00:15:05.227170 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s
2025-03-11 00:15:05.227361 | orchestrator | osism.commons.network : Create required directories --------------------- 1.25s
2025-03-11 00:15:05.227729 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.22s
2025-03-11 00:15:05.228025 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.13s
2025-03-11 00:15:05.228579 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.09s
2025-03-11 00:15:05.228894 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s
2025-03-11 00:15:05.229638 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.94s
2025-03-11 00:15:05.230104 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.92s
2025-03-11 00:15:05.230246 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.91s
2025-03-11 00:15:05.230835 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.78s
2025-03-11 00:15:05.923963 | orchestrator | + osism apply wireguard
2025-03-11 00:15:07.573218 | orchestrator | 2025-03-11 00:15:07 | INFO  | Task 0faced21-921b-468a-9b57-85576cfd6a64 (wireguard) was prepared for execution.
2025-03-11 00:15:11.113834 | orchestrator | 2025-03-11 00:15:07 | INFO  | It takes a moment until task 0faced21-921b-468a-9b57-85576cfd6a64 (wireguard) has been started and output is visible here.
2025-03-11 00:15:11.113997 | orchestrator |
2025-03-11 00:15:11.114394 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-03-11 00:15:11.114931 | orchestrator |
2025-03-11 00:15:11.115506 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-03-11 00:15:11.117614 | orchestrator | Tuesday 11 March 2025 00:15:11 +0000 (0:00:00.195) 0:00:00.195 *********
2025-03-11 00:15:12.812897 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:12.813094 | orchestrator |
2025-03-11 00:15:12.813121 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-03-11 00:15:12.813870 | orchestrator | Tuesday 11 March 2025 00:15:12 +0000 (0:00:01.700) 0:00:01.895 *********
2025-03-11 00:15:20.203150 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:20.205903 | orchestrator |
2025-03-11 00:15:20.205951 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-03-11 00:15:20.207630 | orchestrator | Tuesday 11 March 2025 00:15:20 +0000 (0:00:07.389) 0:00:09.285 *********
2025-03-11 00:15:20.810000 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:20.814962 | orchestrator |
2025-03-11 00:15:20.815441 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-03-11 00:15:20.816252 | orchestrator | Tuesday 11 March 2025 00:15:20 +0000 (0:00:00.608) 0:00:09.893 *********
2025-03-11 00:15:21.280862 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:21.281336 | orchestrator |
2025-03-11 00:15:21.282199 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-03-11 00:15:21.862379 | orchestrator | Tuesday 11 March 2025 00:15:21 +0000 (0:00:00.470) 0:00:10.364 *********
2025-03-11 00:15:21.862536 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:21.862721 | orchestrator |
2025-03-11 00:15:21.862747 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-03-11 00:15:21.862788 | orchestrator | Tuesday 11 March 2025 00:15:21 +0000 (0:00:00.581) 0:00:10.946 *********
2025-03-11 00:15:22.469770 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:22.470325 | orchestrator |
2025-03-11 00:15:22.470961 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-03-11 00:15:22.471834 | orchestrator | Tuesday 11 March 2025 00:15:22 +0000 (0:00:00.605) 0:00:11.551 *********
2025-03-11 00:15:22.923199 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:22.923438 | orchestrator |
2025-03-11 00:15:22.924448 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-03-11 00:15:22.924858 | orchestrator | Tuesday 11 March 2025 00:15:22 +0000 (0:00:00.455) 0:00:12.007 *********
2025-03-11 00:15:24.215982 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:24.216212 | orchestrator |
2025-03-11 00:15:24.216741 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-03-11 00:15:24.216777 | orchestrator | Tuesday 11 March 2025 00:15:24 +0000 (0:00:01.293) 0:00:13.301 *********
2025-03-11 00:15:25.156925 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-11 00:15:25.158372 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:25.159496 | orchestrator |
2025-03-11 00:15:25.159911 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-03-11 00:15:25.160719 | orchestrator | Tuesday 11 March 2025 00:15:25 +0000 (0:00:00.940) 0:00:14.241 *********
2025-03-11 00:15:27.155730 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:27.155946 | orchestrator |
2025-03-11 00:15:27.157799 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-03-11 00:15:27.159312 | orchestrator | Tuesday 11 March 2025 00:15:27 +0000 (0:00:01.996) 0:00:16.238 *********
2025-03-11 00:15:28.123222 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:28.123721 | orchestrator |
2025-03-11 00:15:28.124537 | orchestrator | 2025-03-11 00:15:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:15:28.124635 | orchestrator | 2025-03-11 00:15:28 | INFO  | Please wait and do not abort execution.
2025-03-11 00:15:28.124690 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:15:28.128036 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 00:15:28.129053 | orchestrator |
2025-03-11 00:15:28.129541 | orchestrator | Tuesday 11 March 2025 00:15:28 +0000 (0:00:00.970) 0:00:17.208 *********
2025-03-11 00:15:28.130715 | orchestrator | ===============================================================================
2025-03-11 00:15:28.131482 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.39s
2025-03-11 00:15:28.132389 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.00s
2025-03-11 00:15:28.133235 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.70s
2025-03-11 00:15:28.133708 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s
2025-03-11 00:15:28.134119 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s
2025-03-11 00:15:28.134563 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-03-11 00:15:28.135524 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s
2025-03-11 00:15:28.136328 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.61s
2025-03-11 00:15:28.137131 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.58s
2025-03-11 00:15:28.137724 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2025-03-11 00:15:28.138241 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2025-03-11 00:15:28.878581 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-03-11 00:15:28.919993 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-03-11 00:15:29.003505 | orchestrator | Dload Upload Total Spent Left Speed
2025-03-11 00:15:29.003636 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 179 0 --:--:-- --:--:-- --:--:-- 180
2025-03-11 00:15:29.016303 | orchestrator | + osism apply --environment custom workarounds
2025-03-11 00:15:30.648187 | orchestrator | 2025-03-11 00:15:30 | INFO  | Trying to run play workarounds in environment custom
2025-03-11 00:15:30.712490 | orchestrator | 2025-03-11 00:15:30 | INFO  | Task 1c4c2e51-3bb3-48f4-8a8a-098bd161a9c5 (workarounds) was prepared for execution.
2025-03-11 00:15:34.336809 | orchestrator | 2025-03-11 00:15:30 | INFO  | It takes a moment until task 1c4c2e51-3bb3-48f4-8a8a-098bd161a9c5 (workarounds) has been started and output is visible here.
2025-03-11 00:15:34.336957 | orchestrator |
2025-03-11 00:15:34.341312 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 00:15:34.342106 | orchestrator |
2025-03-11 00:15:34.342733 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-03-11 00:15:34.343572 | orchestrator | Tuesday 11 March 2025 00:15:34 +0000 (0:00:00.172) 0:00:00.172 *********
2025-03-11 00:15:34.528890 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-03-11 00:15:34.638228 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-03-11 00:15:34.730920 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-03-11 00:15:34.832916 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-03-11 00:15:34.933355 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-03-11 00:15:35.244290 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-03-11 00:15:35.245058 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-03-11 00:15:35.245939 | orchestrator |
2025-03-11 00:15:35.247710 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-03-11 00:15:35.248001 | orchestrator |
2025-03-11 00:15:35.248850 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-11 00:15:35.249126 | orchestrator | Tuesday 11 March 2025 00:15:35 +0000 (0:00:00.911) 0:00:01.084 *********
2025-03-11 00:15:38.312713 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:38.312993 | orchestrator |
2025-03-11 00:15:38.313023 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-03-11 00:15:38.313039 | orchestrator |
2025-03-11 00:15:38.313061 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-11 00:15:40.219019 | orchestrator | Tuesday 11 March 2025 00:15:38 +0000 (0:00:03.033) 0:00:04.117 *********
2025-03-11 00:15:40.219205 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:15:40.219278 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:15:40.219931 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:15:40.220976 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:15:40.221472 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:15:40.222435 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:15:40.222540 | orchestrator |
2025-03-11 00:15:40.223620 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-03-11 00:15:40.224168 | orchestrator |
2025-03-11 00:15:40.224739 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-03-11 00:15:40.225217 | orchestrator | Tuesday 11 March 2025 00:15:40 +0000 (0:00:01.937) 0:00:06.054 *********
2025-03-11 00:15:41.847284 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.847538 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.848417 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.849034 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.850828 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.851740 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-11 00:15:41.855293 | orchestrator |
2025-03-11 00:15:45.568120 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-03-11 00:15:45.568353 | orchestrator | Tuesday 11 March 2025 00:15:41 +0000 (0:00:01.625) 0:00:07.680 *********
2025-03-11 00:15:45.568398 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:15:45.568486 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:15:45.568823 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:15:45.568865 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:15:45.572137 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:15:45.751425 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:15:45.751527 | orchestrator |
2025-03-11 00:15:45.751546 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-03-11 00:15:45.751562 | orchestrator | Tuesday 11 March 2025 00:15:45 +0000 (0:00:03.724) 0:00:11.405 *********
2025-03-11 00:15:45.751629 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:15:45.831132 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:15:45.927263 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:15:46.205060 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:15:46.383556 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:15:46.384145 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:15:46.385174 | orchestrator |
2025-03-11 00:15:46.391041 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-03-11 00:15:48.202282 | orchestrator |
2025-03-11 00:15:48.202434 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-03-11 00:15:48.202455 | orchestrator | Tuesday 11 March 2025 00:15:46 +0000 (0:00:00.815) 0:00:12.220 *********
2025-03-11 00:15:48.202492 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:48.205475 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:15:48.206815 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:15:48.206842 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:15:48.206862 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:15:48.210415 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:15:48.211120 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:15:48.212330 | orchestrator |
2025-03-11 00:15:48.212561 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-03-11 00:15:48.213302 | orchestrator | Tuesday 11 March 2025 00:15:48 +0000 (0:00:01.816) 0:00:14.037 *********
2025-03-11 00:15:49.890625 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:49.891868 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:15:49.891906 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:15:49.892221 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:15:49.892773 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:15:49.893192 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:15:49.893523 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:15:49.894524 | orchestrator |
2025-03-11 00:15:49.895853 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-03-11 00:15:49.896814 | orchestrator | Tuesday 11 March 2025 00:15:49 +0000 (0:00:01.576) 0:00:15.724 *********
2025-03-11 00:15:51.463721 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:15:51.467238 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:15:51.467288 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:15:51.467397 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:15:51.467427 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:15:51.467443 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:51.467457 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:15:51.467476 | orchestrator |
2025-03-11 00:15:51.469394 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-03-11 00:15:51.470006 | orchestrator | Tuesday 11 March 2025 00:15:51 +0000 (0:00:01.576) 0:00:17.300 *********
2025-03-11 00:15:53.350762 | orchestrator | changed: [testbed-manager]
2025-03-11 00:15:53.350972 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:15:53.351573 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:15:53.351692 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:15:53.351731 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:15:53.352281 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:15:53.352348 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:15:53.352410 | orchestrator |
2025-03-11 00:15:53.357811 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-03-11 00:15:53.510093 | orchestrator | Tuesday 11 March 2025 00:15:53 +0000 (0:00:01.888) 0:00:19.188 *********
2025-03-11 00:15:53.510194 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:15:53.599060 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:15:53.700270 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:15:53.785972 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:15:54.064074 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:15:54.200239 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:15:54.201834 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:15:54.203247 | orchestrator |
2025-03-11 00:15:54.206350 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-03-11 00:15:54.206878 | orchestrator |
2025-03-11 00:15:54.208043 | orchestrator | TASK [Install python3-docker] **************************************************
2025-03-11 00:15:54.208073 | orchestrator | Tuesday 11 March 2025 00:15:54 +0000 (0:00:00.849) 0:00:20.038 *********
2025-03-11 00:15:57.168101 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:15:57.168289 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:15:57.168317 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:15:57.168766 | orchestrator | ok: [testbed-manager]
2025-03-11 00:15:57.169869 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:15:57.170868 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:15:57.171209 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:15:57.172055 | orchestrator |
2025-03-11 00:15:57.174451 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:15:57.175138 | orchestrator | 2025-03-11 00:15:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:15:57.176527 | orchestrator | 2025-03-11 00:15:57 | INFO  | Please wait and do not abort execution.
2025-03-11 00:15:57.176560 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-11 00:15:57.177201 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.178403 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.178783 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.179857 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.181999 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.183068 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:15:57.184158 | orchestrator |
2025-03-11 00:15:57.184906 | orchestrator | Tuesday 11 March 2025 00:15:57 +0000 (0:00:02.966) 0:00:23.005 *********
2025-03-11 00:15:57.185838 | orchestrator | ===============================================================================
2025-03-11 00:15:57.186835 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s
2025-03-11 00:15:57.188141 | orchestrator | Apply netplan configuration --------------------------------------------- 3.03s
2025-03-11 00:15:57.188925 | orchestrator | Install python3-docker -------------------------------------------------- 2.97s
2025-03-11 00:15:57.189666 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s
2025-03-11 00:15:57.190436 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2025-03-11 00:15:57.191224 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.82s
2025-03-11 00:15:57.192877 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s
2025-03-11 00:15:57.193599 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.63s
2025-03-11 00:15:57.194242 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.58s
2025-03-11 00:15:57.194741 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.91s
2025-03-11 00:15:57.195750 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.85s
2025-03-11 00:15:57.196210 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.82s
2025-03-11 00:15:57.839218 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-03-11 00:15:59.479917 | orchestrator | 2025-03-11 00:15:59 | INFO  | Task cd14cf39-a3f7-47fd-996f-f4663fb863e2 (reboot) was prepared for execution.
2025-03-11 00:16:03.153339 | orchestrator | 2025-03-11 00:15:59 | INFO  | It takes a moment until task cd14cf39-a3f7-47fd-996f-f4663fb863e2 (reboot) has been started and output is visible here.
2025-03-11 00:16:03.153519 | orchestrator |
2025-03-11 00:16:03.155615 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:03.268152 | orchestrator |
2025-03-11 00:16:03.268248 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:03.268266 | orchestrator | Tuesday 11 March 2025 00:16:03 +0000 (0:00:00.169) 0:00:00.169 *********
2025-03-11 00:16:03.268295 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:16:04.235781 | orchestrator |
2025-03-11 00:16:04.235911 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:04.235928 | orchestrator | Tuesday 11 March 2025 00:16:03 +0000 (0:00:00.118) 0:00:00.287 *********
2025-03-11 00:16:04.235962 | orchestrator | changed: [testbed-node-0]
2025-03-11 00:16:04.236762 | orchestrator |
2025-03-11 00:16:04.237564 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:04.238289 | orchestrator | Tuesday 11 March 2025 00:16:04 +0000 (0:00:00.966) 0:00:01.253 *********
2025-03-11 00:16:04.354492 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:16:04.354912 | orchestrator |
2025-03-11 00:16:04.354949 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:04.355739 | orchestrator |
2025-03-11 00:16:04.356144 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:04.356451 | orchestrator | Tuesday 11 March 2025 00:16:04 +0000 (0:00:00.120) 0:00:01.374 *********
2025-03-11 00:16:04.459333 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:16:04.459611 | orchestrator |
2025-03-11 00:16:04.460858 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:05.102562 | orchestrator | Tuesday 11 March 2025 00:16:04 +0000 (0:00:00.106) 0:00:01.480 *********
2025-03-11 00:16:05.102812 | orchestrator | changed: [testbed-node-1]
2025-03-11 00:16:05.102911 | orchestrator |
2025-03-11 00:16:05.215135 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:05.215227 | orchestrator | Tuesday 11 March 2025 00:16:05 +0000 (0:00:00.643) 0:00:02.124 *********
2025-03-11 00:16:05.215256 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:16:05.215323 | orchestrator |
2025-03-11 00:16:05.216128 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:05.216819 | orchestrator |
2025-03-11 00:16:05.217470 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:05.218192 | orchestrator | Tuesday 11 March 2025 00:16:05 +0000 (0:00:00.108) 0:00:02.232 *********
2025-03-11 00:16:05.313608 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:16:05.313949 | orchestrator |
2025-03-11 00:16:05.314710 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:05.316040 | orchestrator | Tuesday 11 March 2025 00:16:05 +0000 (0:00:00.100) 0:00:02.333 *********
2025-03-11 00:16:06.117677 | orchestrator | changed: [testbed-node-2]
2025-03-11 00:16:06.117869 | orchestrator |
2025-03-11 00:16:06.118356 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:06.118608 | orchestrator | Tuesday 11 March 2025 00:16:06 +0000 (0:00:00.804) 0:00:03.137 *********
2025-03-11 00:16:06.247734 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:16:06.248887 | orchestrator |
2025-03-11 00:16:06.249679 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:06.250613 | orchestrator |
2025-03-11 00:16:06.251005 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:06.251316 | orchestrator | Tuesday 11 March 2025 00:16:06 +0000 (0:00:00.107) 0:00:03.264 *********
2025-03-11 00:16:06.351909 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:16:06.352030 | orchestrator |
2025-03-11 00:16:06.352048 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:06.352064 | orchestrator | Tuesday 11 March 2025 00:16:06 +0000 (0:00:00.107) 0:00:03.372 *********
2025-03-11 00:16:07.001826 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:16:07.002378 | orchestrator |
2025-03-11 00:16:07.002421 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:07.003084 | orchestrator | Tuesday 11 March 2025 00:16:07 +0000 (0:00:00.648) 0:00:04.021 *********
2025-03-11 00:16:07.113267 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:16:07.113798 | orchestrator |
2025-03-11 00:16:07.113837 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:07.114282 | orchestrator |
2025-03-11 00:16:07.115063 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:07.116195 | orchestrator | Tuesday 11 March 2025 00:16:07 +0000 (0:00:00.110) 0:00:04.131 *********
2025-03-11 00:16:07.206013 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:16:07.206175 | orchestrator |
2025-03-11 00:16:07.207869 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:07.208790 | orchestrator | Tuesday 11 March 2025 00:16:07 +0000 (0:00:00.094) 0:00:04.225 *********
2025-03-11 00:16:07.875351 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:16:07.875545 | orchestrator |
2025-03-11 00:16:07.876590 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:07.877061 | orchestrator | Tuesday 11 March 2025 00:16:07 +0000 (0:00:00.670) 0:00:04.896 *********
2025-03-11 00:16:07.987152 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:16:07.987338 | orchestrator |
2025-03-11 00:16:07.988076 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-11 00:16:07.989196 | orchestrator |
2025-03-11 00:16:07.989611 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-11 00:16:07.990187 | orchestrator | Tuesday 11 March 2025 00:16:07 +0000 (0:00:00.109) 0:00:05.006 *********
2025-03-11 00:16:08.116601 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:16:08.117618 | orchestrator |
2025-03-11 00:16:08.118311 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-11 00:16:08.118941 | orchestrator | Tuesday 11 March 2025 00:16:08 +0000 (0:00:00.130) 0:00:05.137 *********
2025-03-11 00:16:08.770040 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:16:08.770198 | orchestrator |
2025-03-11 00:16:08.770815 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-11 00:16:08.771320 | orchestrator | Tuesday 11 March 2025 00:16:08 +0000 (0:00:00.652) 0:00:05.789 *********
2025-03-11 00:16:08.797711 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:16:08.798291 | orchestrator |
2025-03-11 00:16:08.798967 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:16:08.799355 | orchestrator | 2025-03-11 00:16:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:16:08.799935 | orchestrator | 2025-03-11 00:16:08 | INFO  | Please wait and do not abort execution.
2025-03-11 00:16:08.800596 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.801410 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.802126 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.802541 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.802883 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.803269 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:16:08.803755 | orchestrator |
2025-03-11 00:16:08.804032 | orchestrator | Tuesday 11 March 2025 00:16:08 +0000 (0:00:00.029) 0:00:05.819 *********
2025-03-11 00:16:08.804413 | orchestrator | ===============================================================================
2025-03-11 00:16:08.805061 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s
2025-03-11 00:16:08.805775 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s
2025-03-11 00:16:08.806132 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s
2025-03-11 00:16:09.433442 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-03-11 00:16:11.050515 | orchestrator | 2025-03-11 00:16:11 | INFO  | Task e3ed1ca5-5c90-407e-882c-6458c468adfd (wait-for-connection) was prepared for execution.
2025-03-11 00:16:14.543065 | orchestrator | 2025-03-11 00:16:11 | INFO  | It takes a moment until task e3ed1ca5-5c90-407e-882c-6458c468adfd (wait-for-connection) has been started and output is visible here.
2025-03-11 00:16:14.543218 | orchestrator |
2025-03-11 00:16:14.543298 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-03-11 00:16:14.543354 | orchestrator |
2025-03-11 00:16:14.543415 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-03-11 00:16:14.545488 | orchestrator | Tuesday 11 March 2025 00:16:14 +0000 (0:00:00.212) 0:00:00.212 *********
2025-03-11 00:16:27.646868 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:16:27.647763 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:16:27.647807 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:16:27.647824 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:16:27.647847 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:16:27.651199 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:16:27.651233 | orchestrator |
2025-03-11 00:16:27.651330 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:16:27.651845 | orchestrator | 2025-03-11 00:16:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:16:27.652097 | orchestrator | 2025-03-11 00:16:27 | INFO  | Please wait and do not abort execution.
2025-03-11 00:16:27.653854 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.655067 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.655645 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.656739 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.657149 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.659540 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:27.660359 | orchestrator | 2025-03-11 00:16:27.662328 | orchestrator | Tuesday 11 March 2025 00:16:27 +0000 (0:00:13.093) 0:00:13.306 ********* 2025-03-11 00:16:27.663056 | orchestrator | =============================================================================== 2025-03-11 00:16:27.663761 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.10s 2025-03-11 00:16:28.323986 | orchestrator | + osism apply hddtemp 2025-03-11 00:16:30.478348 | orchestrator | 2025-03-11 00:16:30 | INFO  | Task f754946a-e739-4925-b0f7-e246a8663fb0 (hddtemp) was prepared for execution. 2025-03-11 00:16:33.989996 | orchestrator | 2025-03-11 00:16:30 | INFO  | It takes a moment until task f754946a-e739-4925-b0f7-e246a8663fb0 (hddtemp) has been started and output is visible here. 
2025-03-11 00:16:33.990235 | orchestrator | 2025-03-11 00:16:33.990320 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-03-11 00:16:33.990340 | orchestrator | 2025-03-11 00:16:33.990380 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-03-11 00:16:33.990666 | orchestrator | Tuesday 11 March 2025 00:16:33 +0000 (0:00:00.231) 0:00:00.231 ********* 2025-03-11 00:16:34.171401 | orchestrator | ok: [testbed-manager] 2025-03-11 00:16:34.250294 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:16:34.332168 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:16:34.413380 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:16:34.497292 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:16:34.745277 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:16:34.745900 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:16:34.747911 | orchestrator | 2025-03-11 00:16:34.749579 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-03-11 00:16:34.750083 | orchestrator | Tuesday 11 March 2025 00:16:34 +0000 (0:00:00.756) 0:00:00.987 ********* 2025-03-11 00:16:36.080389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-11 00:16:36.083863 | orchestrator | 2025-03-11 00:16:36.083903 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-03-11 00:16:36.084870 | orchestrator | Tuesday 11 March 2025 00:16:36 +0000 (0:00:01.330) 0:00:02.317 ********* 2025-03-11 00:16:38.311321 | orchestrator | ok: [testbed-manager] 2025-03-11 00:16:38.311783 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:16:38.312358 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:16:38.312466 | 
orchestrator | ok: [testbed-node-2] 2025-03-11 00:16:38.312644 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:16:38.313222 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:16:38.314511 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:16:38.314750 | orchestrator | 2025-03-11 00:16:38.315860 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-03-11 00:16:38.316260 | orchestrator | Tuesday 11 March 2025 00:16:38 +0000 (0:00:02.219) 0:00:04.537 ********* 2025-03-11 00:16:39.130347 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:16:39.262819 | orchestrator | changed: [testbed-manager] 2025-03-11 00:16:39.374851 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:16:39.866234 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:16:39.866801 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:16:39.866840 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:16:39.867832 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:16:39.868731 | orchestrator | 2025-03-11 00:16:39.869193 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-03-11 00:16:39.869808 | orchestrator | Tuesday 11 March 2025 00:16:39 +0000 (0:00:01.569) 0:00:06.107 ********* 2025-03-11 00:16:41.387884 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:16:41.389136 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:16:41.392875 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:16:41.392960 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:16:41.392980 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:16:41.394253 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:16:41.394477 | orchestrator | ok: [testbed-manager] 2025-03-11 00:16:41.394654 | orchestrator | 2025-03-11 00:16:41.394946 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-03-11 00:16:41.395173 | orchestrator | Tuesday 11 March 2025 00:16:41 +0000 
(0:00:01.519) 0:00:07.626 ********* 2025-03-11 00:16:41.684350 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:16:41.775372 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:16:41.883190 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:16:41.976441 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:16:42.128497 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:16:42.129731 | orchestrator | changed: [testbed-manager] 2025-03-11 00:16:42.131488 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:16:42.131960 | orchestrator | 2025-03-11 00:16:42.132871 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-03-11 00:16:42.133795 | orchestrator | Tuesday 11 March 2025 00:16:42 +0000 (0:00:00.747) 0:00:08.373 ********* 2025-03-11 00:16:56.444763 | orchestrator | changed: [testbed-manager] 2025-03-11 00:16:56.445882 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:16:56.445929 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:16:56.447426 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:16:56.447461 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:16:56.450487 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:16:56.451589 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:16:56.452322 | orchestrator | 2025-03-11 00:16:56.453530 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-03-11 00:16:56.454147 | orchestrator | Tuesday 11 March 2025 00:16:56 +0000 (0:00:14.305) 0:00:22.679 ********* 2025-03-11 00:16:57.722207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-11 00:16:57.722420 | orchestrator | 2025-03-11 00:16:57.726299 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-03-11 00:16:57.727474 | orchestrator | Tuesday 11 March 2025 00:16:57 +0000 (0:00:01.281) 0:00:23.961 ********* 2025-03-11 00:16:59.890391 | orchestrator | changed: [testbed-node-0] 2025-03-11 00:16:59.892059 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:16:59.894377 | orchestrator | changed: [testbed-node-2] 2025-03-11 00:16:59.894415 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:16:59.897430 | orchestrator | changed: [testbed-node-1] 2025-03-11 00:16:59.899982 | orchestrator | changed: [testbed-manager] 2025-03-11 00:16:59.900008 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:16:59.900027 | orchestrator | 2025-03-11 00:16:59.903827 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:16:59.904491 | orchestrator | 2025-03-11 00:16:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:16:59.904835 | orchestrator | 2025-03-11 00:16:59 | INFO  | Please wait and do not abort execution. 
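For reference, the drivetemp steps traced in the hddtemp play above (Enable Kernel Module drivetemp, Check if drivetemp module is available, Load Kernel Module drivetemp) can be sketched as plain shell. This is a hedged reconstruction, not the role's actual tasks: the osism.services.hddtemp implementation is not shown in the log, and the `modules-load.d` mechanism and the `enable_drivetemp` helper name are assumptions.

```shell
# Hedged sketch of the three drivetemp tasks visible in the play output.
# The real role is Ansible; this mirrors the observable effect only.
enable_drivetemp() {
    local conf=${1:-/etc/modules-load.d/drivetemp.conf}   # assumed persistence path
    echo drivetemp > "$conf"                   # "Enable Kernel Module drivetemp"
    if modinfo drivetemp >/dev/null 2>&1; then # "Check if drivetemp module is available"
        # "Load Kernel Module drivetemp" — skipped when already loaded,
        # matching the skipping:/changed: split seen above
        lsmod | grep -q '^drivetemp' || modprobe drivetemp
    fi
}
```

In the recap above only testbed-manager reports `changed` for the load task: on the compute nodes the module was evidently already loaded, so the task skipped, which is the same short-circuit the `lsmod | grep` guard expresses here.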
2025-03-11 00:16:59.906729 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:16:59.907402 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.908463 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.909771 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.910387 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.911660 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.912635 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:16:59.914073 | orchestrator | 2025-03-11 00:16:59.914917 | orchestrator | Tuesday 11 March 2025 00:16:59 +0000 (0:00:02.173) 0:00:26.134 ********* 2025-03-11 00:16:59.915978 | orchestrator | =============================================================================== 2025-03-11 00:16:59.916929 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.31s 2025-03-11 00:16:59.917906 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.22s 2025-03-11 00:16:59.919202 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.17s 2025-03-11 00:16:59.920071 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.57s 2025-03-11 00:16:59.920744 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.52s 2025-03-11 00:16:59.921709 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2025-03-11 00:16:59.922490 | orchestrator | 
osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2025-03-11 00:16:59.923054 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.76s 2025-03-11 00:16:59.923727 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.75s 2025-03-11 00:17:00.731925 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-11 00:17:02.075585 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-11 00:17:02.075848 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-11 00:17:02.075876 | orchestrator | + local max_attempts=60 2025-03-11 00:17:02.075892 | orchestrator | + local name=ceph-ansible 2025-03-11 00:17:02.075907 | orchestrator | + local attempt_num=1 2025-03-11 00:17:02.075928 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-11 00:17:02.104417 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:17:02.105213 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-11 00:17:02.105270 | orchestrator | + local max_attempts=60 2025-03-11 00:17:02.105287 | orchestrator | + local name=kolla-ansible 2025-03-11 00:17:02.105304 | orchestrator | + local attempt_num=1 2025-03-11 00:17:02.105326 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-11 00:17:02.134240 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 00:17:02.160249 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-11 00:17:02.160291 | orchestrator | + local max_attempts=60 2025-03-11 00:17:02.160308 | orchestrator | + local name=osism-ansible 2025-03-11 00:17:02.160324 | orchestrator | + local attempt_num=1 2025-03-11 00:17:02.160340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-11 00:17:02.160365 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-11 
00:17:02.598218 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-11 00:17:02.598358 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-11 00:17:02.598396 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-11 00:17:03.010806 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-11 00:17:03.374890 | orchestrator | ARA in osism-ansible already disabled. 2025-03-11 00:17:03.696216 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-11 00:17:03.697636 | orchestrator | + osism apply gather-facts 2025-03-11 00:17:05.398710 | orchestrator | 2025-03-11 00:17:05 | INFO  | Task e60307d8-4e6c-4d3d-9467-d157ef33e3dd (gather-facts) was prepared for execution. 2025-03-11 00:17:09.056516 | orchestrator | 2025-03-11 00:17:05 | INFO  | It takes a moment until task e60307d8-4e6c-4d3d-9467-d157ef33e3dd (gather-facts) has been started and output is visible here. 2025-03-11 00:17:09.056747 | orchestrator | 2025-03-11 00:17:09.059070 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-11 00:17:09.061741 | orchestrator | 2025-03-11 00:17:14.309430 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-11 00:17:14.309649 | orchestrator | Tuesday 11 March 2025 00:17:09 +0000 (0:00:00.221) 0:00:00.221 ********* 2025-03-11 00:17:14.309694 | orchestrator | ok: [testbed-node-2] 2025-03-11 00:17:14.313723 | orchestrator | ok: [testbed-node-0] 2025-03-11 00:17:14.314716 | orchestrator | ok: [testbed-node-1] 2025-03-11 00:17:14.314809 | orchestrator | ok: [testbed-manager] 2025-03-11 00:17:14.314833 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:17:14.314892 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:17:14.315919 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:17:14.316733 | orchestrator | 2025-03-11 00:17:14.317381 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 
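The `wait_for_container_healthy` calls traced above come from a bash helper whose body is only partially visible in the xtrace (`max_attempts`, `name`, `attempt_num`, and the `docker inspect` health probe). A minimal sketch consistent with that trace follows; the retry loop and sleep interval are assumed rather than observed, since all three containers were already healthy on the first probe.

```shell
# Reconstruction of the helper seen in the xtrace: poll a container's
# Docker health status until it reports "healthy" or attempts run out.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # polling interval: an assumption, not visible in the trace
    done
}

# usage, as in the log:
#   wait_for_container_healthy 60 ceph-ansible
```

Requires the container to define a HEALTHCHECK; `docker inspect` errors out otherwise, which the `until` condition treats as "not healthy yet".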
2025-03-11 00:17:14.318109 | orchestrator | 2025-03-11 00:17:14.318692 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-11 00:17:14.319167 | orchestrator | Tuesday 11 March 2025 00:17:14 +0000 (0:00:05.250) 0:00:05.472 ********* 2025-03-11 00:17:14.488066 | orchestrator | skipping: [testbed-manager] 2025-03-11 00:17:14.593584 | orchestrator | skipping: [testbed-node-0] 2025-03-11 00:17:14.688535 | orchestrator | skipping: [testbed-node-1] 2025-03-11 00:17:14.775054 | orchestrator | skipping: [testbed-node-2] 2025-03-11 00:17:14.859411 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:17:14.896815 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:17:14.897568 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:17:14.897641 | orchestrator | 2025-03-11 00:17:14.897837 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:17:14.898170 | orchestrator | 2025-03-11 00:17:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:17:14.899825 | orchestrator | 2025-03-11 00:17:14 | INFO  | Please wait and do not abort execution. 
2025-03-11 00:17:14.899857 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.902418 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.904948 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.905587 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.907934 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.908020 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.908297 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-11 00:17:14.908579 | orchestrator | 2025-03-11 00:17:14.908632 | orchestrator | Tuesday 11 March 2025 00:17:14 +0000 (0:00:00.595) 0:00:06.067 ********* 2025-03-11 00:17:14.908782 | orchestrator | =============================================================================== 2025-03-11 00:17:14.909118 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.25s 2025-03-11 00:17:14.909259 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-03-11 00:17:15.695753 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-11 00:17:15.708748 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-11 00:17:15.721578 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-11 00:17:15.733320 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-11 00:17:15.745826 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-11 00:17:15.762086 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-11 00:17:15.779438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-11 00:17:15.794253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-11 00:17:15.809561 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-11 00:17:15.823078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-11 00:17:15.836633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-11 00:17:15.854558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-11 00:17:15.870093 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-03-11 00:17:15.885371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-11 00:17:15.898431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-11 00:17:15.913042 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-11 00:17:15.928858 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-11 00:17:15.945400 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-11 00:17:15.960486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-11 00:17:15.973451 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-11 00:17:15.989049 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-11 00:17:16.242841 | orchestrator | changed 2025-03-11 00:17:16.309713 | 2025-03-11 00:17:16.309838 | TASK [Deploy services] 2025-03-11 00:17:16.421121 | orchestrator | skipping: Conditional result was False 2025-03-11 00:17:16.442106 | 2025-03-11 00:17:16.442297 | TASK [Deploy in a nutshell] 2025-03-11 00:17:17.147105 | orchestrator | + set -e 2025-03-11 00:17:17.148091 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-11 00:17:17.148136 | orchestrator | ++ export INTERACTIVE=false 2025-03-11 00:17:17.148156 | orchestrator | ++ INTERACTIVE=false 2025-03-11 00:17:17.148201 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-11 00:17:17.148221 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-11 00:17:17.148239 | orchestrator | + source /opt/manager-vars.sh 2025-03-11 00:17:17.148266 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-11 00:17:17.148292 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-11 00:17:17.148311 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-11 00:17:17.148326 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-11 00:17:17.148341 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-11 00:17:17.148355 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-11 00:17:17.148369 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-11 00:17:17.148383 | orchestrator | ++ 
MANAGER_VERSION=8.1.0 2025-03-11 00:17:17.148398 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-11 00:17:17.148412 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-11 00:17:17.148426 | orchestrator | ++ export ARA=false 2025-03-11 00:17:17.148440 | orchestrator | ++ ARA=false 2025-03-11 00:17:17.148454 | orchestrator | ++ export TEMPEST=false 2025-03-11 00:17:17.148467 | orchestrator | ++ TEMPEST=false 2025-03-11 00:17:17.148481 | orchestrator | ++ export IS_ZUUL=true 2025-03-11 00:17:17.148495 | orchestrator | ++ IS_ZUUL=true 2025-03-11 00:17:17.148509 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-11 00:17:17.148524 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.198 2025-03-11 00:17:17.148538 | orchestrator | ++ export EXTERNAL_API=false 2025-03-11 00:17:17.148553 | orchestrator | ++ EXTERNAL_API=false 2025-03-11 00:17:17.148566 | orchestrator | 2025-03-11 00:17:17.148580 | orchestrator | # PULL IMAGES 2025-03-11 00:17:17.148594 | orchestrator | 2025-03-11 00:17:17.148636 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-11 00:17:17.148650 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-11 00:17:17.148672 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-11 00:17:17.148686 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-11 00:17:17.148700 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-11 00:17:17.148714 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-11 00:17:17.148728 | orchestrator | + echo 2025-03-11 00:17:17.148742 | orchestrator | + echo '# PULL IMAGES' 2025-03-11 00:17:17.148756 | orchestrator | + echo 2025-03-11 00:17:17.148781 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-11 00:17:17.197772 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-11 00:17:18.956303 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-03-11 00:17:18.956432 | orchestrator | 2025-03-11 00:17:18 | INFO  | Trying to run play pull-images in environment custom 2025-03-11 00:17:19.024277 | orchestrator 
| 2025-03-11 00:17:19 | INFO  | Task ff8690bd-586a-43e3-9efa-6b14339ab49e (pull-images) was prepared for execution. 2025-03-11 00:17:22.873116 | orchestrator | 2025-03-11 00:17:19 | INFO  | It takes a moment until task ff8690bd-586a-43e3-9efa-6b14339ab49e (pull-images) has been started and output is visible here. 2025-03-11 00:17:22.873242 | orchestrator | 2025-03-11 00:17:22.873848 | orchestrator | PLAY [Pull images] ************************************************************* 2025-03-11 00:17:22.873873 | orchestrator | 2025-03-11 00:17:22.873898 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-03-11 00:17:22.873919 | orchestrator | Tuesday 11 March 2025 00:17:22 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-03-11 00:18:02.153644 | orchestrator | changed: [testbed-manager] 2025-03-11 00:19:02.573777 | orchestrator | 2025-03-11 00:19:02.573935 | orchestrator | TASK [Pull other images] ******************************************************* 2025-03-11 00:19:02.573960 | orchestrator | Tuesday 11 March 2025 00:18:02 +0000 (0:00:39.268) 0:00:39.431 ********* 2025-03-11 00:19:02.573994 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-03-11 00:19:02.574465 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-03-11 00:19:02.574501 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-03-11 00:19:02.574516 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-03-11 00:19:02.574546 | orchestrator | changed: [testbed-manager] => (item=common) 2025-03-11 00:19:02.574561 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-03-11 00:19:02.574607 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-03-11 00:19:02.574624 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-03-11 00:19:02.574666 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-03-11 00:19:02.574681 | orchestrator | changed: [testbed-manager] => 
(item=horizon) 2025-03-11 00:19:02.574707 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-03-11 00:19:02.579123 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-03-11 00:19:02.579387 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-03-11 00:19:02.579423 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-03-11 00:19:02.580061 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-03-11 00:19:02.581076 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-03-11 00:19:02.582195 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-03-11 00:19:02.582511 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-03-11 00:19:02.583396 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-03-11 00:19:02.583814 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-03-11 00:19:02.584155 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-03-11 00:19:02.584637 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-03-11 00:19:02.585558 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-03-11 00:19:02.586116 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-03-11 00:19:02.586144 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-03-11 00:19:02.586164 | orchestrator | 2025-03-11 00:19:02.586478 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:19:02.586933 | orchestrator | 2025-03-11 00:19:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:19:02.587384 | orchestrator | 2025-03-11 00:19:02 | INFO  | Please wait and do not abort execution. 
2025-03-11 00:19:02.587414 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 00:19:02.587782 | orchestrator | 2025-03-11 00:19:02.588261 | orchestrator | Tuesday 11 March 2025 00:19:02 +0000 (0:01:00.430) 0:01:39.861 ********* 2025-03-11 00:19:02.588819 | orchestrator | =============================================================================== 2025-03-11 00:19:02.589352 | orchestrator | Pull other images ------------------------------------------------------ 60.43s 2025-03-11 00:19:02.589691 | orchestrator | Pull keystone image ---------------------------------------------------- 39.27s 2025-03-11 00:19:04.863837 | orchestrator | 2025-03-11 00:19:04 | INFO  | Trying to run play wipe-partitions in environment custom 2025-03-11 00:19:04.908427 | orchestrator | 2025-03-11 00:19:04 | INFO  | Task 9c25db47-1c4f-400f-814e-a39d2fa63f86 (wipe-partitions) was prepared for execution. 2025-03-11 00:19:08.404197 | orchestrator | 2025-03-11 00:19:04 | INFO  | It takes a moment until task 9c25db47-1c4f-400f-814e-a39d2fa63f86 (wipe-partitions) has been started and output is visible here. 
2025-03-11 00:19:08.404330 | orchestrator | 2025-03-11 00:19:08.404489 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-03-11 00:19:08.404681 | orchestrator | 2025-03-11 00:19:08.404973 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-03-11 00:19:08.408428 | orchestrator | Tuesday 11 March 2025 00:19:08 +0000 (0:00:00.180) 0:00:00.180 ********* 2025-03-11 00:19:09.054806 | orchestrator | changed: [testbed-node-3] 2025-03-11 00:19:09.055191 | orchestrator | changed: [testbed-node-4] 2025-03-11 00:19:09.055234 | orchestrator | changed: [testbed-node-5] 2025-03-11 00:19:09.055297 | orchestrator | 2025-03-11 00:19:09.060989 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-03-11 00:19:09.061598 | orchestrator | Tuesday 11 March 2025 00:19:09 +0000 (0:00:00.656) 0:00:00.836 ********* 2025-03-11 00:19:09.241818 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:19:09.365392 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:19:09.367462 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:19:09.367632 | orchestrator | 2025-03-11 00:19:09.367665 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-03-11 00:19:10.205800 | orchestrator | Tuesday 11 March 2025 00:19:09 +0000 (0:00:00.310) 0:00:01.147 ********* 2025-03-11 00:19:10.205968 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:19:10.206098 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:19:10.206432 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:19:10.206785 | orchestrator | 2025-03-11 00:19:10.209540 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-03-11 00:19:10.378995 | orchestrator | Tuesday 11 March 2025 00:19:10 +0000 (0:00:00.837) 0:00:01.985 ********* 2025-03-11 00:19:10.379054 | orchestrator | skipping: 
[testbed-node-3]
2025-03-11 00:19:10.493262 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:19:10.493809 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:19:10.493843 | orchestrator |
2025-03-11 00:19:10.493940 | orchestrator | TASK [Check device availability] ***********************************************
2025-03-11 00:19:10.494319 | orchestrator | Tuesday 11 March 2025 00:19:10 +0000 (0:00:00.289) 0:00:02.274 *********
2025-03-11 00:19:11.778556 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-03-11 00:19:11.778897 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-03-11 00:19:11.778929 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-03-11 00:19:11.778953 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-03-11 00:19:11.779124 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-03-11 00:19:11.779460 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-03-11 00:19:11.779944 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-03-11 00:19:11.780236 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-03-11 00:19:11.781898 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-03-11 00:19:11.782633 | orchestrator |
2025-03-11 00:19:11.782970 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-03-11 00:19:11.784073 | orchestrator | Tuesday 11 March 2025 00:19:11 +0000 (0:00:01.286) 0:00:03.560 *********
2025-03-11 00:19:13.239009 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-03-11 00:19:13.240326 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-03-11 00:19:13.240978 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-03-11 00:19:13.241180 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-03-11 00:19:13.241342 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-03-11 00:19:13.241704 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-03-11 00:19:13.243168 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-03-11 00:19:13.243247 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-03-11 00:19:13.243682 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-03-11 00:19:13.243977 | orchestrator |
2025-03-11 00:19:13.244314 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-03-11 00:19:13.244693 | orchestrator | Tuesday 11 March 2025 00:19:13 +0000 (0:00:01.459) 0:00:05.020 *********
2025-03-11 00:19:15.802298 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-03-11 00:19:15.805380 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-03-11 00:19:15.809387 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-03-11 00:19:15.812390 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-03-11 00:19:15.815591 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-03-11 00:19:15.816165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-03-11 00:19:15.816184 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-03-11 00:19:15.816446 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-03-11 00:19:15.816486 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-03-11 00:19:15.817039 | orchestrator |
2025-03-11 00:19:15.817596 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-03-11 00:19:15.818785 | orchestrator | Tuesday 11 March 2025 00:19:15 +0000 (0:00:02.560) 0:00:07.581 *********
2025-03-11 00:19:16.425478 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:19:16.426509 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:19:16.426710 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:19:16.427903 | orchestrator |
2025-03-11 00:19:16.429037 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-03-11 00:19:17.078817 | orchestrator | Tuesday 11 March 2025 00:19:16 +0000 (0:00:00.627) 0:00:08.208 *********
2025-03-11 00:19:17.078946 | orchestrator | changed: [testbed-node-3]
2025-03-11 00:19:17.082415 | orchestrator | changed: [testbed-node-4]
2025-03-11 00:19:17.082732 | orchestrator | changed: [testbed-node-5]
2025-03-11 00:19:17.083544 | orchestrator |
2025-03-11 00:19:17.084424 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:19:17.084525 | orchestrator | 2025-03-11 00:19:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:19:17.085172 | orchestrator | 2025-03-11 00:19:17 | INFO  | Please wait and do not abort execution.
2025-03-11 00:19:17.086109 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:17.086530 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:17.087277 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:17.087777 | orchestrator |
2025-03-11 00:19:17.089967 | orchestrator | Tuesday 11 March 2025 00:19:17 +0000 (0:00:00.651) 0:00:08.860 *********
2025-03-11 00:19:17.090384 | orchestrator | ===============================================================================
2025-03-11 00:19:17.090717 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.56s
2025-03-11 00:19:17.090994 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s
2025-03-11 00:19:17.091887 | orchestrator | Check device availability ----------------------------------------------- 1.29s
2025-03-11 00:19:17.092258 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.84s
2025-03-11 00:19:17.092283 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.66s
2025-03-11 00:19:17.092303 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2025-03-11 00:19:17.092793 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-03-11 00:19:17.093106 | orchestrator | Remove all rook related logical devices --------------------------------- 0.31s
2025-03-11 00:19:17.093434 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2025-03-11 00:19:20.243807 | orchestrator | 2025-03-11 00:19:20 | INFO  | Task 76d19162-a61a-4448-9fe8-24a804940c62 (facts) was prepared for execution.
2025-03-11 00:19:23.866112 | orchestrator | 2025-03-11 00:19:20 | INFO  | It takes a moment until task 76d19162-a61a-4448-9fe8-24a804940c62 (facts) has been started and output is visible here.
2025-03-11 00:19:23.866249 | orchestrator |
2025-03-11 00:19:23.867387 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-03-11 00:19:23.867423 | orchestrator |
2025-03-11 00:19:23.868486 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-03-11 00:19:23.871258 | orchestrator | Tuesday 11 March 2025 00:19:23 +0000 (0:00:00.273) 0:00:00.273 *********
2025-03-11 00:19:25.035544 | orchestrator | ok: [testbed-manager]
2025-03-11 00:19:25.036736 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:19:25.038627 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:19:25.039430 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:19:25.040486 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:25.042131 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:19:25.044745 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:19:25.045538 | orchestrator |
2025-03-11 00:19:25.046854 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-03-11 00:19:25.047459 | orchestrator | Tuesday 11 March 2025 00:19:25 +0000 (0:00:01.169) 0:00:01.443 *********
2025-03-11 00:19:25.218072 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:19:25.311601 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:19:25.452937 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:19:25.571619 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:19:25.667847 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:26.584938 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:19:26.585088 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:19:26.585112 | orchestrator |
2025-03-11 00:19:26.585897 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-11 00:19:26.587124 | orchestrator |
2025-03-11 00:19:26.588387 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 00:19:26.589261 | orchestrator | Tuesday 11 March 2025 00:19:26 +0000 (0:00:01.548) 0:00:02.992 *********
2025-03-11 00:19:31.528708 | orchestrator | ok: [testbed-node-1]
2025-03-11 00:19:31.528971 | orchestrator | ok: [testbed-node-2]
2025-03-11 00:19:31.529001 | orchestrator | ok: [testbed-node-0]
2025-03-11 00:19:31.529016 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:31.529037 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:19:31.529359 | orchestrator | ok: [testbed-manager]
2025-03-11 00:19:31.529776 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:19:31.530634 | orchestrator |
2025-03-11 00:19:31.530908 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-03-11 00:19:31.534818 | orchestrator |
2025-03-11 00:19:31.937634 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-03-11 00:19:31.937753 | orchestrator | Tuesday 11 March 2025 00:19:31 +0000 (0:00:04.949) 0:00:07.941 *********
2025-03-11 00:19:31.937800 | orchestrator | skipping: [testbed-manager]
2025-03-11 00:19:32.031942 | orchestrator | skipping: [testbed-node-0]
2025-03-11 00:19:32.136415 | orchestrator | skipping: [testbed-node-1]
2025-03-11 00:19:32.220120 | orchestrator | skipping: [testbed-node-2]
2025-03-11 00:19:32.307201 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:32.353706 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:19:32.357519 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:19:32.357989 | orchestrator |
2025-03-11 00:19:32.360233 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 00:19:32.362357 | orchestrator | 2025-03-11 00:19:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 00:19:32.362397 | orchestrator | 2025-03-11 00:19:32 | INFO  | Please wait and do not abort execution.
2025-03-11 00:19:32.362420 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.364862 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.367290 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.370327 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.371976 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.372643 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.373454 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 00:19:32.374349 | orchestrator |
2025-03-11 00:19:32.374929 | orchestrator | Tuesday 11 March 2025 00:19:32 +0000 (0:00:00.823) 0:00:08.765 *********
2025-03-11 00:19:32.375966 | orchestrator | ===============================================================================
2025-03-11 00:19:32.376840 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s
2025-03-11 00:19:32.377584 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.55s
2025-03-11 00:19:32.378662 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2025-03-11 00:19:32.379363 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.82s
2025-03-11 00:19:34.915602 | orchestrator | 2025-03-11 00:19:34 | INFO  | Task e20f0fb7-6ab6-4c00-88a2-dbfcfc36d0e2 (ceph-configure-lvm-volumes) was prepared for execution.
2025-03-11 00:19:38.586169 | orchestrator | 2025-03-11 00:19:34 | INFO  | It takes a moment until task e20f0fb7-6ab6-4c00-88a2-dbfcfc36d0e2 (ceph-configure-lvm-volumes) has been started and output is visible here.
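The disk-wipe play above zeroes the first 32 MiB of each OSD candidate device (`/dev/sdb`..`/dev/sdd`), which is enough to destroy partition tables, LVM labels, and Ceph bluestore signatures at the head of the disk. As a rough illustration only (a hypothetical helper, not the playbook's actual dd invocation), the "Overwrite first 32M with zeros" step could be sketched in Python like this:

```python
def zero_first_32m(path, block_size=1024 * 1024, count=32):
    """Overwrite the first `count` blocks of `block_size` bytes with zeros,
    mirroring `dd if=/dev/zero bs=1M count=32` against a device or file.
    Returns the number of bytes written."""
    zeros = b"\x00" * block_size
    written = 0
    # "r+b" opens for in-place writing without truncating the target.
    with open(path, "r+b") as f:
        for _ in range(count):
            f.write(zeros)
            written += block_size
    return written
```

On a real node this would be followed by `udevadm control --reload` and `udevadm trigger`, matching the "Reload udev rules" and "Request device events from the kernel" tasks in the play.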
2025-03-11 00:19:38.586310 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-03-11 00:19:39.265115 | orchestrator |
2025-03-11 00:19:39.265466 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-03-11 00:19:39.265972 | orchestrator |
2025-03-11 00:19:39.267534 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 00:19:39.267785 | orchestrator | Tuesday 11 March 2025 00:19:39 +0000 (0:00:00.569) 0:00:00.569 *********
2025-03-11 00:19:39.549940 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-03-11 00:19:39.550179 | orchestrator |
2025-03-11 00:19:39.550536 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 00:19:39.551290 | orchestrator | Tuesday 11 March 2025 00:19:39 +0000 (0:00:00.288) 0:00:00.858 *********
2025-03-11 00:19:39.823031 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:39.823171 | orchestrator |
2025-03-11 00:19:40.414222 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:40.414336 | orchestrator | Tuesday 11 March 2025 00:19:39 +0000 (0:00:00.265) 0:00:01.123 *********
2025-03-11 00:19:40.414372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-03-11 00:19:40.415615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-03-11 00:19:40.416246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-03-11 00:19:40.417983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-03-11 00:19:40.420528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-03-11 00:19:40.422054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-03-11 00:19:40.422336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-03-11 00:19:40.422715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-03-11 00:19:40.423011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-03-11 00:19:40.424234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-03-11 00:19:40.424562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-03-11 00:19:40.424824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-03-11 00:19:40.425604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-03-11 00:19:40.425857 | orchestrator |
2025-03-11 00:19:40.426692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:40.427117 | orchestrator | Tuesday 11 March 2025 00:19:40 +0000 (0:00:00.600) 0:00:01.724 *********
2025-03-11 00:19:40.650237 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:40.852861 | orchestrator |
2025-03-11 00:19:40.853048 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:40.853081 | orchestrator | Tuesday 11 March 2025 00:19:40 +0000 (0:00:00.231) 0:00:01.956 *********
2025-03-11 00:19:40.853136 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:40.853226 | orchestrator |
2025-03-11 00:19:40.853252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:40.853282 | orchestrator | Tuesday 11 March 2025 00:19:40 +0000 (0:00:00.204) 0:00:02.160 *********
2025-03-11 00:19:41.122307 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:41.122804 | orchestrator |
2025-03-11 00:19:41.122844 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:41.123045 | orchestrator | Tuesday 11 March 2025 00:19:41 +0000 (0:00:00.269) 0:00:02.430 *********
2025-03-11 00:19:41.345605 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:41.347272 | orchestrator |
2025-03-11 00:19:41.347581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:41.566949 | orchestrator | Tuesday 11 March 2025 00:19:41 +0000 (0:00:00.219) 0:00:02.650 *********
2025-03-11 00:19:41.567067 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:41.568439 | orchestrator |
2025-03-11 00:19:41.569931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:41.573032 | orchestrator | Tuesday 11 March 2025 00:19:41 +0000 (0:00:00.226) 0:00:02.876 *********
2025-03-11 00:19:41.775953 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:41.777919 | orchestrator |
2025-03-11 00:19:41.778748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:41.778783 | orchestrator | Tuesday 11 March 2025 00:19:41 +0000 (0:00:00.209) 0:00:03.085 *********
2025-03-11 00:19:41.992824 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:41.994257 | orchestrator |
2025-03-11 00:19:41.995957 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:41.998296 | orchestrator | Tuesday 11 March 2025 00:19:41 +0000 (0:00:00.215) 0:00:03.301 *********
2025-03-11 00:19:42.219863 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:42.223103 | orchestrator |
2025-03-11 00:19:42.223136 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:42.223158 | orchestrator | Tuesday 11 March 2025 00:19:42 +0000 (0:00:00.227) 0:00:03.529 *********
2025-03-11 00:19:43.006825 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_194aeec8-a038-4d98-ad9f-169d629e88aa)
2025-03-11 00:19:43.007704 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_194aeec8-a038-4d98-ad9f-169d629e88aa)
2025-03-11 00:19:43.008645 | orchestrator |
2025-03-11 00:19:43.010518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:44.178114 | orchestrator | Tuesday 11 March 2025 00:19:43 +0000 (0:00:00.786) 0:00:04.315 *********
2025-03-11 00:19:44.178258 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed3d5c7a-4300-47cf-88fa-db7e232461c4)
2025-03-11 00:19:44.178844 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed3d5c7a-4300-47cf-88fa-db7e232461c4)
2025-03-11 00:19:44.179020 | orchestrator |
2025-03-11 00:19:44.179052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:44.179125 | orchestrator | Tuesday 11 March 2025 00:19:44 +0000 (0:00:01.169) 0:00:05.485 *********
2025-03-11 00:19:44.917391 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ef8fa17c-1885-4415-b267-a55d447b75a1)
2025-03-11 00:19:44.918730 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ef8fa17c-1885-4415-b267-a55d447b75a1)
2025-03-11 00:19:44.920061 | orchestrator |
2025-03-11 00:19:44.921940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:44.923086 | orchestrator | Tuesday 11 March 2025 00:19:44 +0000 (0:00:00.740) 0:00:06.225 *********
2025-03-11 00:19:45.506574 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4a50e71d-fbe2-4470-bd50-185934b47889)
2025-03-11 00:19:45.516302 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4a50e71d-fbe2-4470-bd50-185934b47889)
2025-03-11 00:19:45.518455 | orchestrator |
2025-03-11 00:19:45.519125 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:19:45.520941 | orchestrator | Tuesday 11 March 2025 00:19:45 +0000 (0:00:00.589) 0:00:06.815 *********
2025-03-11 00:19:45.998088 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 00:19:45.998440 | orchestrator |
2025-03-11 00:19:45.999375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:46.002979 | orchestrator | Tuesday 11 March 2025 00:19:45 +0000 (0:00:00.488) 0:00:07.303 *********
2025-03-11 00:19:46.698645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-03-11 00:19:46.699347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-03-11 00:19:46.703107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-03-11 00:19:46.704282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-03-11 00:19:46.704314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-03-11 00:19:46.704670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-03-11 00:19:46.708343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-03-11 00:19:46.708440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-03-11 00:19:46.708459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-03-11 00:19:46.708477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-03-11 00:19:46.716152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-03-11 00:19:46.968879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-03-11 00:19:46.968965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-03-11 00:19:46.968982 | orchestrator |
2025-03-11 00:19:46.968997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:46.969019 | orchestrator | Tuesday 11 March 2025 00:19:46 +0000 (0:00:00.703) 0:00:08.007 *********
2025-03-11 00:19:46.969046 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:46.969380 | orchestrator |
2025-03-11 00:19:46.971462 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:46.971490 | orchestrator | Tuesday 11 March 2025 00:19:46 +0000 (0:00:00.268) 0:00:08.275 *********
2025-03-11 00:19:47.293017 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:47.293181 | orchestrator |
2025-03-11 00:19:47.293238 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:47.293786 | orchestrator | Tuesday 11 March 2025 00:19:47 +0000 (0:00:00.326) 0:00:08.601 *********
2025-03-11 00:19:47.613597 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:47.614521 | orchestrator |
2025-03-11 00:19:47.614752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:47.615511 | orchestrator | Tuesday 11 March 2025 00:19:47 +0000 (0:00:00.316) 0:00:08.918 *********
2025-03-11 00:19:47.922117 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:47.922316 | orchestrator |
2025-03-11 00:19:47.923947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:47.924737 | orchestrator | Tuesday 11 March 2025 00:19:47 +0000 (0:00:00.311) 0:00:09.229 *********
2025-03-11 00:19:48.976895 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:48.977951 | orchestrator |
2025-03-11 00:19:48.977988 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:48.978217 | orchestrator | Tuesday 11 March 2025 00:19:48 +0000 (0:00:01.053) 0:00:10.283 *********
2025-03-11 00:19:49.333868 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:49.334052 | orchestrator |
2025-03-11 00:19:49.335033 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:49.335358 | orchestrator | Tuesday 11 March 2025 00:19:49 +0000 (0:00:00.356) 0:00:10.640 *********
2025-03-11 00:19:49.610574 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:49.611370 | orchestrator |
2025-03-11 00:19:49.611394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:49.611413 | orchestrator | Tuesday 11 March 2025 00:19:49 +0000 (0:00:00.278) 0:00:10.919 *********
2025-03-11 00:19:49.871430 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:49.871968 | orchestrator |
2025-03-11 00:19:49.872005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:49.872028 | orchestrator | Tuesday 11 March 2025 00:19:49 +0000 (0:00:00.257) 0:00:11.177 *********
2025-03-11 00:19:50.734332 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-03-11 00:19:50.734532 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-03-11 00:19:50.734616 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-03-11 00:19:50.736374 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-03-11 00:19:50.736640 | orchestrator |
2025-03-11 00:19:50.736892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:50.737520 | orchestrator | Tuesday 11 March 2025 00:19:50 +0000 (0:00:00.864) 0:00:12.041 *********
2025-03-11 00:19:50.981365 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:50.983136 | orchestrator |
2025-03-11 00:19:51.329503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:51.329652 | orchestrator | Tuesday 11 March 2025 00:19:50 +0000 (0:00:00.249) 0:00:12.291 *********
2025-03-11 00:19:51.329682 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:51.334051 | orchestrator |
2025-03-11 00:19:51.334394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:51.337072 | orchestrator | Tuesday 11 March 2025 00:19:51 +0000 (0:00:00.347) 0:00:12.639 *********
2025-03-11 00:19:51.736493 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:51.737388 | orchestrator |
2025-03-11 00:19:51.738097 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:19:51.741099 | orchestrator | Tuesday 11 March 2025 00:19:51 +0000 (0:00:00.402) 0:00:13.042 *********
2025-03-11 00:19:52.139289 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:52.140323 | orchestrator |
2025-03-11 00:19:52.383102 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-03-11 00:19:52.383197 | orchestrator | Tuesday 11 March 2025 00:19:52 +0000 (0:00:00.406) 0:00:13.448 *********
2025-03-11 00:19:52.383226 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-03-11 00:19:52.383432 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-03-11 00:19:52.384429 | orchestrator |
2025-03-11 00:19:52.387627 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-03-11 00:19:52.388111 | orchestrator | Tuesday 11 March 2025 00:19:52 +0000 (0:00:00.243) 0:00:13.691 *********
2025-03-11 00:19:52.751933 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:52.753487 | orchestrator |
2025-03-11 00:19:52.753708 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-03-11 00:19:52.761802 | orchestrator | Tuesday 11 March 2025 00:19:52 +0000 (0:00:00.368) 0:00:14.059 *********
2025-03-11 00:19:52.920358 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:52.922235 | orchestrator |
2025-03-11 00:19:52.922981 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-03-11 00:19:52.925053 | orchestrator | Tuesday 11 March 2025 00:19:52 +0000 (0:00:00.169) 0:00:14.229 *********
2025-03-11 00:19:53.080797 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:53.081558 | orchestrator |
2025-03-11 00:19:53.082745 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-03-11 00:19:53.083046 | orchestrator | Tuesday 11 March 2025 00:19:53 +0000 (0:00:00.160) 0:00:14.390 *********
2025-03-11 00:19:53.246926 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:53.248180 | orchestrator |
2025-03-11 00:19:53.250806 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-03-11 00:19:53.254956 | orchestrator | Tuesday 11 March 2025 00:19:53 +0000 (0:00:00.163) 0:00:14.553 *********
2025-03-11 00:19:53.505785 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a6d50340-d3ef-59a1-9773-9878296a9d55'}})
2025-03-11 00:19:53.505964 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}})
2025-03-11 00:19:53.507317 | orchestrator |
2025-03-11 00:19:53.508638 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-03-11 00:19:53.701934 | orchestrator | Tuesday 11 March 2025 00:19:53 +0000 (0:00:00.259) 0:00:14.813 *********
2025-03-11 00:19:53.702063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a6d50340-d3ef-59a1-9773-9878296a9d55'}})
2025-03-11 00:19:53.702428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}})
2025-03-11 00:19:53.704897 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:53.706193 | orchestrator |
2025-03-11 00:19:53.707203 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-03-11 00:19:53.708013 | orchestrator | Tuesday 11 March 2025 00:19:53 +0000 (0:00:00.196) 0:00:15.009 *********
2025-03-11 00:19:53.911791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a6d50340-d3ef-59a1-9773-9878296a9d55'}})
2025-03-11 00:19:53.912925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}})
2025-03-11 00:19:53.912959 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:54.100450 | orchestrator |
2025-03-11 00:19:54.100500 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-03-11 00:19:54.100516 | orchestrator | Tuesday 11 March 2025 00:19:53 +0000 (0:00:00.207) 0:00:15.217 *********
2025-03-11 00:19:54.100582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a6d50340-d3ef-59a1-9773-9878296a9d55'}})
2025-03-11 00:19:54.101399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}})
2025-03-11 00:19:54.104373 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:54.255298 | orchestrator |
2025-03-11 00:19:54.255386 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-03-11 00:19:54.255403 | orchestrator | Tuesday 11 March 2025 00:19:54 +0000 (0:00:00.188) 0:00:15.406 *********
2025-03-11 00:19:54.255429 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:54.255512 | orchestrator |
2025-03-11 00:19:54.256994 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-03-11 00:19:54.257880 | orchestrator | Tuesday 11 March 2025 00:19:54 +0000 (0:00:00.157) 0:00:15.563 *********
2025-03-11 00:19:54.399012 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:19:54.399168 | orchestrator |
2025-03-11 00:19:54.400327 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-03-11 00:19:54.400962 | orchestrator | Tuesday 11 March 2025 00:19:54 +0000 (0:00:00.144) 0:00:15.708 *********
2025-03-11 00:19:54.556875 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:54.558128 | orchestrator |
2025-03-11 00:19:54.558448 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-03-11 00:19:54.558879 | orchestrator | Tuesday 11 March 2025 00:19:54 +0000 (0:00:00.158) 0:00:15.866 *********
2025-03-11 00:19:54.709958 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:54.710878 | orchestrator |
2025-03-11 00:19:54.711019 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-03-11 00:19:54.711130 | orchestrator | Tuesday 11 March 2025 00:19:54 +0000 (0:00:00.153) 0:00:16.020 *********
2025-03-11 00:19:55.105432 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:55.105814 | orchestrator |
2025-03-11 00:19:55.105929 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-03-11 00:19:55.106481 | orchestrator | Tuesday 11 March 2025 00:19:55 +0000 (0:00:00.395) 0:00:16.415 *********
2025-03-11 00:19:55.263896 | orchestrator | ok: [testbed-node-3] => {
2025-03-11 00:19:55.264311 | orchestrator |  "ceph_osd_devices": {
2025-03-11 00:19:55.265354 | orchestrator |  "sdb": {
2025-03-11 00:19:55.266373 | orchestrator |  "osd_lvm_uuid": "a6d50340-d3ef-59a1-9773-9878296a9d55"
2025-03-11 00:19:55.267310 | orchestrator |  },
2025-03-11 00:19:55.268209 | orchestrator |  "sdc": {
2025-03-11 00:19:55.269358 | orchestrator |  "osd_lvm_uuid": "7cf9be44-5ddc-5078-ba28-c8dfc9bc1211"
2025-03-11 00:19:55.270695 | orchestrator |  }
2025-03-11 00:19:55.271710 | orchestrator |  }
2025-03-11 00:19:55.274209 | orchestrator | }
2025-03-11 00:19:55.275475 | orchestrator |
2025-03-11 00:19:55.276772 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-03-11 00:19:55.277058 | orchestrator | Tuesday 11 March 2025 00:19:55 +0000 (0:00:00.158) 0:00:16.573 *********
2025-03-11 00:19:55.434678 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:55.434847 | orchestrator |
2025-03-11 00:19:55.434875 | orchestrator | TASK [Print DB devices] ********************************************************
2025-03-11 00:19:55.435139 | orchestrator | Tuesday 11 March 2025 00:19:55 +0000 (0:00:00.170) 0:00:16.744 *********
2025-03-11 00:19:55.587962 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:55.588117 | orchestrator |
2025-03-11 00:19:55.589031 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-03-11 00:19:55.589452 | orchestrator | Tuesday 11 March 2025 00:19:55 +0000 (0:00:00.152) 0:00:16.896 *********
2025-03-11 00:19:55.752236 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:19:55.752947 | orchestrator |
2025-03-11 00:19:55.754214 | orchestrator | TASK [Print configuration data] ************************************************
2025-03-11 00:19:55.754716 | orchestrator | Tuesday 11 March 2025 00:19:55 +0000 (0:00:00.160) 0:00:17.057 *********
2025-03-11 00:19:56.094345 | orchestrator | changed: [testbed-node-3] => {
2025-03-11 00:19:56.095299 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-03-11 00:19:56.096015 | orchestrator |  "ceph_osd_devices": {
2025-03-11 00:19:56.097151 | orchestrator |  "sdb": {
2025-03-11 00:19:56.097411 | orchestrator |  "osd_lvm_uuid": "a6d50340-d3ef-59a1-9773-9878296a9d55"
2025-03-11 00:19:56.099620 | orchestrator |  },
2025-03-11 00:19:56.100480 | orchestrator |  "sdc": {
2025-03-11 00:19:56.100875 | orchestrator |  "osd_lvm_uuid": "7cf9be44-5ddc-5078-ba28-c8dfc9bc1211"
2025-03-11 00:19:56.101695 | orchestrator |  }
2025-03-11 00:19:56.102735 | orchestrator |  },
2025-03-11 00:19:56.103086 | orchestrator |  "lvm_volumes": [
2025-03-11 00:19:56.103951 | orchestrator |  {
2025-03-11 00:19:56.105369 | orchestrator |  "data": "osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55",
2025-03-11 00:19:56.106352 | orchestrator |  "data_vg": "ceph-a6d50340-d3ef-59a1-9773-9878296a9d55"
2025-03-11 00:19:56.106815 | orchestrator |  },
2025-03-11 00:19:56.108022 | orchestrator |  {
2025-03-11 00:19:56.108438 | orchestrator |  "data": "osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211",
2025-03-11 00:19:56.109616 | orchestrator |  "data_vg": "ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211"
2025-03-11 00:19:56.110110 | orchestrator |  }
2025-03-11 00:19:56.110631 | orchestrator |  ]
2025-03-11 00:19:56.111217 | orchestrator |  }
2025-03-11 00:19:56.112039 | orchestrator | }
2025-03-11 00:19:56.112593 | orchestrator |
2025-03-11 00:19:56.113332 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-03-11 00:19:56.113761 | orchestrator | Tuesday 11 March 2025 00:19:56 +0000 (0:00:00.345) 0:00:17.403 *********
2025-03-11 00:19:58.506009 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-03-11 00:19:58.816224 | orchestrator |
2025-03-11 00:19:58.816332 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-03-11 00:19:58.816351 | orchestrator | 2025-03-11 00:19:58.816366 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-11 00:19:58.816380 | orchestrator | Tuesday 11 March 2025 00:19:58 +0000 (0:00:02.408) 0:00:19.812 ********* 2025-03-11 00:19:58.816410 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-11 00:19:58.817745 | orchestrator | 2025-03-11 00:19:58.820622 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-11 00:19:58.824459 | orchestrator | Tuesday 11 March 2025 00:19:58 +0000 (0:00:00.312) 0:00:20.125 ********* 2025-03-11 00:19:59.095690 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:19:59.097118 | orchestrator | 2025-03-11 00:19:59.097665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:19:59.097682 | orchestrator | Tuesday 11 March 2025 00:19:59 +0000 (0:00:00.280) 0:00:20.405 ********* 2025-03-11 00:19:59.577870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-11 00:19:59.578795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-11 00:19:59.580202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-11 00:19:59.580580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-11 00:19:59.581668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-11 00:19:59.582809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-11 00:19:59.583967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-11 00:19:59.585011 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-11 00:19:59.586111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-11 00:19:59.586622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-11 00:19:59.587903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-11 00:19:59.588601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-11 00:19:59.589025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-11 00:19:59.590007 | orchestrator | 2025-03-11 00:19:59.590452 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:19:59.591161 | orchestrator | Tuesday 11 March 2025 00:19:59 +0000 (0:00:00.477) 0:00:20.883 ********* 2025-03-11 00:19:59.829977 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:19:59.832806 | orchestrator | 2025-03-11 00:19:59.836228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:00.051566 | orchestrator | Tuesday 11 March 2025 00:19:59 +0000 (0:00:00.255) 0:00:21.138 ********* 2025-03-11 00:20:00.051701 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:00.052831 | orchestrator | 2025-03-11 00:20:00.052872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:00.288838 | orchestrator | Tuesday 11 March 2025 00:20:00 +0000 (0:00:00.222) 0:00:21.360 ********* 2025-03-11 00:20:00.288967 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:00.291869 | orchestrator | 2025-03-11 00:20:00.294177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:00.294610 | 
orchestrator | Tuesday 11 March 2025 00:20:00 +0000 (0:00:00.235) 0:00:21.596 ********* 2025-03-11 00:20:00.810241 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:00.811584 | orchestrator | 2025-03-11 00:20:00.811880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:00.812560 | orchestrator | Tuesday 11 March 2025 00:20:00 +0000 (0:00:00.522) 0:00:22.119 ********* 2025-03-11 00:20:01.020775 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:01.021269 | orchestrator | 2025-03-11 00:20:01.022909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:01.023478 | orchestrator | Tuesday 11 March 2025 00:20:01 +0000 (0:00:00.210) 0:00:22.330 ********* 2025-03-11 00:20:01.228648 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:01.230352 | orchestrator | 2025-03-11 00:20:01.231826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:01.236260 | orchestrator | Tuesday 11 March 2025 00:20:01 +0000 (0:00:00.207) 0:00:22.537 ********* 2025-03-11 00:20:01.456340 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:01.460677 | orchestrator | 2025-03-11 00:20:01.465523 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:01.687985 | orchestrator | Tuesday 11 March 2025 00:20:01 +0000 (0:00:00.222) 0:00:22.759 ********* 2025-03-11 00:20:01.688104 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:20:01.689269 | orchestrator | 2025-03-11 00:20:01.692141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:01.693976 | orchestrator | Tuesday 11 March 2025 00:20:01 +0000 (0:00:00.236) 0:00:22.996 ********* 2025-03-11 00:20:02.145473 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_66c0b794-c9eb-432e-948e-a9141cffb78f) 2025-03-11 00:20:02.613941 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_66c0b794-c9eb-432e-948e-a9141cffb78f) 2025-03-11 00:20:02.614143 | orchestrator | 2025-03-11 00:20:02.614167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:02.614183 | orchestrator | Tuesday 11 March 2025 00:20:02 +0000 (0:00:00.454) 0:00:23.450 ********* 2025-03-11 00:20:02.614213 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7de028b3-7e0d-4688-b625-ea2556c506ce) 2025-03-11 00:20:02.614831 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7de028b3-7e0d-4688-b625-ea2556c506ce) 2025-03-11 00:20:02.616028 | orchestrator | 2025-03-11 00:20:02.617833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:02.618094 | orchestrator | Tuesday 11 March 2025 00:20:02 +0000 (0:00:00.472) 0:00:23.922 ********* 2025-03-11 00:20:03.131642 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ca5ef8e-9fe9-400b-8f24-d393273052c7) 2025-03-11 00:20:03.132605 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ca5ef8e-9fe9-400b-8f24-d393273052c7) 2025-03-11 00:20:03.133272 | orchestrator | 2025-03-11 00:20:03.134689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:03.135468 | orchestrator | Tuesday 11 March 2025 00:20:03 +0000 (0:00:00.518) 0:00:24.440 ********* 2025-03-11 00:20:03.934386 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9bc23235-2266-4af6-bdbc-90727a536515) 2025-03-11 00:20:03.935450 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9bc23235-2266-4af6-bdbc-90727a536515) 2025-03-11 00:20:03.936425 | orchestrator | 2025-03-11 00:20:03.937678 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-03-11 00:20:03.938915 | orchestrator | Tuesday 11 March 2025 00:20:03 +0000 (0:00:00.802) 0:00:25.243 ********* 2025-03-11 00:20:04.733169 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 00:20:04.734222 | orchestrator | 2025-03-11 00:20:04.734345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:04.735129 | orchestrator | Tuesday 11 March 2025 00:20:04 +0000 (0:00:00.797) 0:00:26.040 ********* 2025-03-11 00:20:05.164848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-11 00:20:05.166225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-11 00:20:05.166267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-11 00:20:05.167699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-11 00:20:05.168912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-11 00:20:05.169856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-11 00:20:05.171079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-11 00:20:05.172413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-11 00:20:05.173144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-11 00:20:05.173843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-11 00:20:05.174366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
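As the `ok:` results for the scsi-0QEMU_… / scsi-SQEMU_… items show, the "Add known links" loop resolves, for each base device, the /dev/disk/by-id symlinks that point at it. The real logic lives in /ansible/tasks/_add-device-links.yml, which this log only pulls in by reference; purely as an editor's sketch (the by-id path and matching rule are assumptions, not taken from the playbook), the same lookup looks roughly like:

```python
import os

def links_for_device(device: str, by_id: str = "/dev/disk/by-id") -> list[str]:
    """Sketch: names of by-id symlinks that resolve to /dev/<device>.

    The actual task file (_add-device-links.yml) is not shown in this log,
    so treat this as an approximation, not the playbook's implementation.
    """
    if not os.path.isdir(by_id):
        # No udev by-id directory (e.g. non-Linux host): nothing to add.
        return []
    return sorted(
        name
        for name in os.listdir(by_id)
        if os.path.basename(os.path.realpath(os.path.join(by_id, name))) == device
    )
```

On the testbed nodes this kind of lookup is what yields two aliases per QEMU disk (one `scsi-0QEMU_…` and one `scsi-SQEMU_…` link for the same backing device), matching the pairs of `ok:` items in the log.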
2025-03-11 00:20:05.175084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-03-11 00:20:05.175682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-03-11 00:20:05.176153 | orchestrator |
2025-03-11 00:20:05.176910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:05.177215 | orchestrator | Tuesday 11 March 2025 00:20:05 +0000 (0:00:00.429) 0:00:26.470 *********
2025-03-11 00:20:05.381880 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:05.382417 | orchestrator |
2025-03-11 00:20:05.382778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:05.383445 | orchestrator | Tuesday 11 March 2025 00:20:05 +0000 (0:00:00.215) 0:00:26.685 *********
2025-03-11 00:20:05.627420 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:05.628082 | orchestrator |
2025-03-11 00:20:05.628127 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:05.630419 | orchestrator | Tuesday 11 March 2025 00:20:05 +0000 (0:00:00.247) 0:00:26.933 *********
2025-03-11 00:20:05.859781 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:05.860285 | orchestrator |
2025-03-11 00:20:06.130214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:06.130324 | orchestrator | Tuesday 11 March 2025 00:20:05 +0000 (0:00:00.235) 0:00:27.169 *********
2025-03-11 00:20:06.130358 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:06.130934 | orchestrator |
2025-03-11 00:20:06.132073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:06.132653 | orchestrator | Tuesday 11 March 2025 00:20:06 +0000 (0:00:00.270) 0:00:27.439 *********
2025-03-11 00:20:06.333770 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:06.334999 | orchestrator |
2025-03-11 00:20:06.336035 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:06.336332 | orchestrator | Tuesday 11 March 2025 00:20:06 +0000 (0:00:00.203) 0:00:27.643 *********
2025-03-11 00:20:06.586148 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:06.587149 | orchestrator |
2025-03-11 00:20:06.587190 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:06.588275 | orchestrator | Tuesday 11 March 2025 00:20:06 +0000 (0:00:00.251) 0:00:27.894 *********
2025-03-11 00:20:06.838281 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:06.842219 | orchestrator |
2025-03-11 00:20:07.103497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:07.103642 | orchestrator | Tuesday 11 March 2025 00:20:06 +0000 (0:00:00.248) 0:00:28.143 *********
2025-03-11 00:20:07.103672 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:07.104327 | orchestrator |
2025-03-11 00:20:07.105700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:07.106455 | orchestrator | Tuesday 11 March 2025 00:20:07 +0000 (0:00:00.268) 0:00:28.412 *********
2025-03-11 00:20:08.272151 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-03-11 00:20:08.272440 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-03-11 00:20:08.275737 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-03-11 00:20:08.276871 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-03-11 00:20:08.276928 | orchestrator |
2025-03-11 00:20:08.277010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:08.278159 | orchestrator | Tuesday 11 March 2025 00:20:08 +0000 (0:00:01.167) 0:00:29.579 *********
2025-03-11 00:20:08.499389 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:08.499524 | orchestrator |
2025-03-11 00:20:08.500251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:08.501001 | orchestrator | Tuesday 11 March 2025 00:20:08 +0000 (0:00:00.228) 0:00:29.808 *********
2025-03-11 00:20:08.705776 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:08.708570 | orchestrator |
2025-03-11 00:20:08.709292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:08.709326 | orchestrator | Tuesday 11 March 2025 00:20:08 +0000 (0:00:00.204) 0:00:30.013 *********
2025-03-11 00:20:08.937027 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:08.937584 | orchestrator |
2025-03-11 00:20:08.938325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:20:08.939002 | orchestrator | Tuesday 11 March 2025 00:20:08 +0000 (0:00:00.233) 0:00:30.246 *********
2025-03-11 00:20:09.151434 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:09.151613 | orchestrator |
2025-03-11 00:20:09.152607 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-03-11 00:20:09.153365 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.215) 0:00:30.461 *********
2025-03-11 00:20:09.336444 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-03-11 00:20:09.337196 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-03-11 00:20:09.337782 | orchestrator |
2025-03-11 00:20:09.338964 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-03-11 00:20:09.339860 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.184) 0:00:30.646 *********
2025-03-11 00:20:09.491913 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:09.492333 | orchestrator |
2025-03-11 00:20:09.492661 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-03-11 00:20:09.630670 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.155) 0:00:30.801 *********
2025-03-11 00:20:09.630752 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:09.630968 | orchestrator |
2025-03-11 00:20:09.631555 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-03-11 00:20:09.631942 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.139) 0:00:30.940 *********
2025-03-11 00:20:09.765724 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:09.765865 | orchestrator |
2025-03-11 00:20:09.766502 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-03-11 00:20:09.766815 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.133) 0:00:31.074 *********
2025-03-11 00:20:09.916297 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:20:09.918139 | orchestrator |
2025-03-11 00:20:09.919113 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-03-11 00:20:10.112691 | orchestrator | Tuesday 11 March 2025 00:20:09 +0000 (0:00:00.149) 0:00:31.224 *********
2025-03-11 00:20:10.112791 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '97e17ca8-03b9-5252-bb63-cb66ff759452'}})
2025-03-11 00:20:10.113848 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '334e37ea-3475-5092-ae3b-ad48e26f1952'}})
2025-03-11 00:20:10.118075 | orchestrator |
2025-03-11 00:20:10.118951 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-03-11 00:20:10.120881 | orchestrator | Tuesday 11 March 2025 00:20:10 +0000 (0:00:00.196) 0:00:31.421 *********
2025-03-11 00:20:10.534131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '97e17ca8-03b9-5252-bb63-cb66ff759452'}})
2025-03-11 00:20:10.534764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '334e37ea-3475-5092-ae3b-ad48e26f1952'}})
2025-03-11 00:20:10.534796 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:10.534820 | orchestrator |
2025-03-11 00:20:10.534992 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-03-11 00:20:10.535691 | orchestrator | Tuesday 11 March 2025 00:20:10 +0000 (0:00:00.421) 0:00:31.843 *********
2025-03-11 00:20:10.726859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '97e17ca8-03b9-5252-bb63-cb66ff759452'}})
2025-03-11 00:20:10.727385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '334e37ea-3475-5092-ae3b-ad48e26f1952'}})
2025-03-11 00:20:10.727413 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:10.727747 | orchestrator |
2025-03-11 00:20:10.728192 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-03-11 00:20:10.728760 | orchestrator | Tuesday 11 March 2025 00:20:10 +0000 (0:00:00.192) 0:00:32.035 *********
2025-03-11 00:20:10.894348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '97e17ca8-03b9-5252-bb63-cb66ff759452'}})
2025-03-11 00:20:10.895101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '334e37ea-3475-5092-ae3b-ad48e26f1952'}})
2025-03-11 00:20:10.895283 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:10.896764 | orchestrator |
2025-03-11 00:20:10.898061 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-03-11 00:20:10.907877 | orchestrator | Tuesday 11 March 2025 00:20:10 +0000 (0:00:00.168) 0:00:32.204 *********
2025-03-11 00:20:11.056615 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:20:11.056779 | orchestrator |
2025-03-11 00:20:11.057035 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-03-11 00:20:11.057265 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.161) 0:00:32.365 *********
2025-03-11 00:20:11.205192 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:20:11.205387 | orchestrator |
2025-03-11 00:20:11.205631 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-03-11 00:20:11.205919 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.149) 0:00:32.515 *********
2025-03-11 00:20:11.339075 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:11.339238 | orchestrator |
2025-03-11 00:20:11.340184 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-03-11 00:20:11.340613 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.133) 0:00:32.649 *********
2025-03-11 00:20:11.476641 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:11.477584 | orchestrator |
2025-03-11 00:20:11.477880 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-03-11 00:20:11.478249 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.135) 0:00:32.785 *********
2025-03-11 00:20:11.618403 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:11.618615 | orchestrator |
2025-03-11 00:20:11.619282 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-03-11 00:20:11.619910 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.138) 0:00:32.923 *********
2025-03-11 00:20:11.764825 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 00:20:11.765194 | orchestrator |     "ceph_osd_devices": {
2025-03-11 00:20:11.766550 | orchestrator |         "sdb": {
2025-03-11 00:20:11.767056 | orchestrator |             "osd_lvm_uuid": "97e17ca8-03b9-5252-bb63-cb66ff759452"
2025-03-11 00:20:11.767917 | orchestrator |         },
2025-03-11 00:20:11.768307 | orchestrator |         "sdc": {
2025-03-11 00:20:11.769002 | orchestrator |             "osd_lvm_uuid": "334e37ea-3475-5092-ae3b-ad48e26f1952"
2025-03-11 00:20:11.769807 | orchestrator |         }
2025-03-11 00:20:11.770271 | orchestrator |     }
2025-03-11 00:20:11.771486 | orchestrator | }
2025-03-11 00:20:11.772553 | orchestrator |
2025-03-11 00:20:11.772993 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-03-11 00:20:11.773639 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.149) 0:00:33.073 *********
2025-03-11 00:20:11.898262 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:11.899220 | orchestrator |
2025-03-11 00:20:11.899486 | orchestrator | TASK [Print DB devices] ********************************************************
2025-03-11 00:20:11.900381 | orchestrator | Tuesday 11 March 2025 00:20:11 +0000 (0:00:00.134) 0:00:33.207 *********
2025-03-11 00:20:12.035708 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:12.036579 | orchestrator |
2025-03-11 00:20:12.038754 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-03-11 00:20:12.177715 | orchestrator | Tuesday 11 March 2025 00:20:12 +0000 (0:00:00.135) 0:00:33.342 *********
2025-03-11 00:20:12.177794 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:20:12.178249 | orchestrator |
2025-03-11 00:20:12.179306 | orchestrator | TASK [Print configuration data] ************************************************
2025-03-11 00:20:12.180849 | orchestrator | Tuesday 11 March 2025 00:20:12 +0000 (0:00:00.144) 0:00:33.487 *********
2025-03-11 00:20:12.698974 | orchestrator | changed: [testbed-node-4] => {
2025-03-11 00:20:12.699627 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-03-11 00:20:12.700670 | orchestrator |         "ceph_osd_devices": {
2025-03-11 00:20:12.701490 | orchestrator |             "sdb": {
2025-03-11 00:20:12.702266 | orchestrator |                 "osd_lvm_uuid": "97e17ca8-03b9-5252-bb63-cb66ff759452"
2025-03-11 00:20:12.703468 | orchestrator |             },
2025-03-11 00:20:12.704197 | orchestrator |             "sdc": {
2025-03-11 00:20:12.705455 | orchestrator |                 "osd_lvm_uuid": "334e37ea-3475-5092-ae3b-ad48e26f1952"
2025-03-11 00:20:12.706413 | orchestrator |             }
2025-03-11 00:20:12.706784 | orchestrator |         },
2025-03-11 00:20:12.707175 | orchestrator |         "lvm_volumes": [
2025-03-11 00:20:12.707611 | orchestrator |             {
2025-03-11 00:20:12.708455 | orchestrator |                 "data": "osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452",
2025-03-11 00:20:12.708891 | orchestrator |                 "data_vg": "ceph-97e17ca8-03b9-5252-bb63-cb66ff759452"
2025-03-11 00:20:12.709040 | orchestrator |             },
2025-03-11 00:20:12.709740 | orchestrator |             {
2025-03-11 00:20:12.709969 | orchestrator |                 "data": "osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952",
2025-03-11 00:20:12.710438 | orchestrator |                 "data_vg": "ceph-334e37ea-3475-5092-ae3b-ad48e26f1952"
2025-03-11 00:20:12.711017 | orchestrator |             }
2025-03-11 00:20:12.711587 | orchestrator |         ]
2025-03-11 00:20:12.712022 | orchestrator |     }
2025-03-11 00:20:12.712790 | orchestrator | }
2025-03-11 00:20:12.713362 | orchestrator |
2025-03-11 00:20:12.713701 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-03-11 00:20:12.714221 | orchestrator | Tuesday 11 March 2025 00:20:12 +0000 (0:00:00.521) 0:00:34.008 *********
2025-03-11 00:20:14.188600 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-03-11 00:20:14.189174 | orchestrator |
2025-03-11 00:20:14.191832 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-03-11 00:20:14.192640 | orchestrator |
2025-03-11 00:20:14.193727 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 00:20:14.193879 | orchestrator | Tuesday 11 March 2025 00:20:14 +0000 (0:00:01.487) 0:00:35.496 *********
2025-03-11 00:20:14.441614 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-03-11 00:20:14.442075 | orchestrator |
2025-03-11 00:20:14.442107 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 00:20:14.442840 | orchestrator | Tuesday 11 March 2025 00:20:14 +0000 (0:00:00.254) 0:00:35.751 *********
2025-03-11 00:20:15.117495 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:20:15.117903 | orchestrator |
2025-03-11 00:20:15.119202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:15.119456 | orchestrator | Tuesday 11 March 2025 00:20:15 +0000 (0:00:00.675) 0:00:36.427 *********
2025-03-11 00:20:15.571405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-03-11 00:20:15.571845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-03-11 00:20:15.572470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-03-11 00:20:15.573443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-03-11 00:20:15.574791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-03-11 00:20:15.575343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-03-11 00:20:15.575850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-03-11 00:20:15.576156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-03-11 00:20:15.576664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-03-11 00:20:15.576926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-03-11 00:20:15.577317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-03-11 00:20:15.577749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-03-11 00:20:15.578095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-03-11 00:20:15.578481 | orchestrator |
2025-03-11 00:20:15.579322 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:15.579737 | orchestrator | Tuesday 11 March 2025 00:20:15 +0000 (0:00:00.452) 0:00:36.880 *********
2025-03-11 00:20:15.778346 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:15.778844 | orchestrator |
2025-03-11 00:20:15.779885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:15.780178 | orchestrator | Tuesday 11 March 2025 00:20:15 +0000 (0:00:00.205) 0:00:37.086 *********
2025-03-11 00:20:16.006911 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:16.007293 | orchestrator |
2025-03-11 00:20:16.008899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:16.009557 | orchestrator | Tuesday 11 March 2025 00:20:16 +0000 (0:00:00.229) 0:00:37.315 *********
2025-03-11 00:20:16.211291 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:16.211436 | orchestrator |
2025-03-11 00:20:16.212139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:16.212708 | orchestrator | Tuesday 11 March 2025 00:20:16 +0000 (0:00:00.205) 0:00:37.520 *********
2025-03-11 00:20:16.447554 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:16.447803 | orchestrator |
2025-03-11 00:20:16.448094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:16.448700 | orchestrator | Tuesday 11 March 2025 00:20:16 +0000 (0:00:00.237) 0:00:37.757 *********
2025-03-11 00:20:16.676497 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:16.676976 | orchestrator |
2025-03-11 00:20:16.677939 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:16.678576 | orchestrator | Tuesday 11 March 2025 00:20:16 +0000 (0:00:00.228) 0:00:37.986 *********
2025-03-11 00:20:16.881331 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:16.883286 | orchestrator |
2025-03-11 00:20:16.884518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:17.102077 | orchestrator | Tuesday 11 March 2025 00:20:16 +0000 (0:00:00.201) 0:00:38.187 *********
2025-03-11 00:20:17.102215 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:17.102572 | orchestrator |
2025-03-11 00:20:17.104064 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:17.104313 | orchestrator | Tuesday 11 March 2025 00:20:17 +0000 (0:00:00.224) 0:00:38.412 *********
2025-03-11 00:20:17.311981 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:20:17.313120 | orchestrator |
2025-03-11 00:20:17.314136 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:20:17.314743 | orchestrator | Tuesday 11 March 2025 00:20:17 +0000 (0:00:00.207) 0:00:38.620 *********
2025-03-11 00:20:18.014118 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71229aa9-8ade-48b7-b965-f199404d9b59)
2025-03-11 00:20:18.014275 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71229aa9-8ade-48b7-b965-f199404d9b59)
2025-03-11 00:20:18.014656 | orchestrator |
2025-03-11 00:20:18.014957 | orchestrator | TASK [Add
known links to the list of available block devices] ****************** 2025-03-11 00:20:18.015238 | orchestrator | Tuesday 11 March 2025 00:20:18 +0000 (0:00:00.702) 0:00:39.322 ********* 2025-03-11 00:20:18.512263 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a404a76d-1978-41bb-a69d-8095668152b7) 2025-03-11 00:20:18.513494 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a404a76d-1978-41bb-a69d-8095668152b7) 2025-03-11 00:20:18.515470 | orchestrator | 2025-03-11 00:20:18.516262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:18.516294 | orchestrator | Tuesday 11 March 2025 00:20:18 +0000 (0:00:00.497) 0:00:39.820 ********* 2025-03-11 00:20:18.984119 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5) 2025-03-11 00:20:18.985180 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5) 2025-03-11 00:20:18.985223 | orchestrator | 2025-03-11 00:20:18.985612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:18.986376 | orchestrator | Tuesday 11 March 2025 00:20:18 +0000 (0:00:00.473) 0:00:40.293 ********* 2025-03-11 00:20:19.502372 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4dc4161b-8ed1-4e64-9782-2a846a023c92) 2025-03-11 00:20:19.502559 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4dc4161b-8ed1-4e64-9782-2a846a023c92) 2025-03-11 00:20:19.504314 | orchestrator | 2025-03-11 00:20:19.505898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:20:19.507306 | orchestrator | Tuesday 11 March 2025 00:20:19 +0000 (0:00:00.516) 0:00:40.810 ********* 2025-03-11 00:20:19.849386 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 00:20:19.850290 | 
orchestrator | 2025-03-11 00:20:19.850994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:19.851868 | orchestrator | Tuesday 11 March 2025 00:20:19 +0000 (0:00:00.348) 0:00:41.159 ********* 2025-03-11 00:20:20.270249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-11 00:20:20.271599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-11 00:20:20.272509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-11 00:20:20.274771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-11 00:20:20.275425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-11 00:20:20.275776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-11 00:20:20.276002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-11 00:20:20.276717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-11 00:20:20.277085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-11 00:20:20.278920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-11 00:20:20.279250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-11 00:20:20.279760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-11 00:20:20.280067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-11 00:20:20.280757 | orchestrator | 
2025-03-11 00:20:20.281079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:20.281418 | orchestrator | Tuesday 11 March 2025 00:20:20 +0000 (0:00:00.420) 0:00:41.579 ********* 2025-03-11 00:20:20.479317 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:20.480749 | orchestrator | 2025-03-11 00:20:20.481980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:20.484553 | orchestrator | Tuesday 11 March 2025 00:20:20 +0000 (0:00:00.209) 0:00:41.789 ********* 2025-03-11 00:20:20.707954 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:20.708074 | orchestrator | 2025-03-11 00:20:20.708809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:20.709182 | orchestrator | Tuesday 11 March 2025 00:20:20 +0000 (0:00:00.228) 0:00:42.017 ********* 2025-03-11 00:20:20.933999 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:20.934776 | orchestrator | 2025-03-11 00:20:20.935093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:20.935978 | orchestrator | Tuesday 11 March 2025 00:20:20 +0000 (0:00:00.225) 0:00:42.243 ********* 2025-03-11 00:20:21.550367 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:21.550738 | orchestrator | 2025-03-11 00:20:21.550808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:21.551458 | orchestrator | Tuesday 11 March 2025 00:20:21 +0000 (0:00:00.615) 0:00:42.858 ********* 2025-03-11 00:20:21.745484 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:21.746412 | orchestrator | 2025-03-11 00:20:21.746450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:21.747141 | orchestrator | Tuesday 11 March 2025 00:20:21 +0000 
(0:00:00.196) 0:00:43.054 ********* 2025-03-11 00:20:21.952376 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:21.952899 | orchestrator | 2025-03-11 00:20:21.953386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:21.954559 | orchestrator | Tuesday 11 March 2025 00:20:21 +0000 (0:00:00.207) 0:00:43.262 ********* 2025-03-11 00:20:22.188054 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:22.188715 | orchestrator | 2025-03-11 00:20:22.189743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:22.191919 | orchestrator | Tuesday 11 March 2025 00:20:22 +0000 (0:00:00.234) 0:00:43.496 ********* 2025-03-11 00:20:22.392426 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:22.392594 | orchestrator | 2025-03-11 00:20:22.392624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:22.392815 | orchestrator | Tuesday 11 March 2025 00:20:22 +0000 (0:00:00.204) 0:00:43.701 ********* 2025-03-11 00:20:23.102268 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-11 00:20:23.103073 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-11 00:20:23.103862 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-11 00:20:23.104795 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-11 00:20:23.105946 | orchestrator | 2025-03-11 00:20:23.106086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:23.106848 | orchestrator | Tuesday 11 March 2025 00:20:23 +0000 (0:00:00.711) 0:00:44.412 ********* 2025-03-11 00:20:23.327988 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:23.328097 | orchestrator | 2025-03-11 00:20:23.328876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:23.329467 | orchestrator | 
Tuesday 11 March 2025 00:20:23 +0000 (0:00:00.224) 0:00:44.636 ********* 2025-03-11 00:20:23.584288 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:23.584822 | orchestrator | 2025-03-11 00:20:23.585058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:23.586174 | orchestrator | Tuesday 11 March 2025 00:20:23 +0000 (0:00:00.257) 0:00:44.894 ********* 2025-03-11 00:20:23.802642 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:23.803040 | orchestrator | 2025-03-11 00:20:23.804510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:20:23.807205 | orchestrator | Tuesday 11 March 2025 00:20:23 +0000 (0:00:00.217) 0:00:45.111 ********* 2025-03-11 00:20:24.018357 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:24.018493 | orchestrator | 2025-03-11 00:20:24.018716 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-11 00:20:24.019361 | orchestrator | Tuesday 11 March 2025 00:20:24 +0000 (0:00:00.216) 0:00:45.327 ********* 2025-03-11 00:20:24.443567 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-03-11 00:20:24.444074 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-03-11 00:20:24.445026 | orchestrator | 2025-03-11 00:20:24.446155 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-11 00:20:24.446643 | orchestrator | Tuesday 11 March 2025 00:20:24 +0000 (0:00:00.423) 0:00:45.751 ********* 2025-03-11 00:20:24.594331 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:24.595810 | orchestrator | 2025-03-11 00:20:24.596504 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-11 00:20:24.597600 | orchestrator | Tuesday 11 March 2025 00:20:24 +0000 (0:00:00.151) 0:00:45.903 ********* 
2025-03-11 00:20:24.762480 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:24.762659 | orchestrator | 2025-03-11 00:20:24.762686 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-11 00:20:24.763181 | orchestrator | Tuesday 11 March 2025 00:20:24 +0000 (0:00:00.167) 0:00:46.070 ********* 2025-03-11 00:20:24.926855 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:24.927006 | orchestrator | 2025-03-11 00:20:24.927728 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-11 00:20:24.928125 | orchestrator | Tuesday 11 March 2025 00:20:24 +0000 (0:00:00.165) 0:00:46.236 ********* 2025-03-11 00:20:25.092072 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:20:25.092251 | orchestrator | 2025-03-11 00:20:25.092688 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-11 00:20:25.092723 | orchestrator | Tuesday 11 March 2025 00:20:25 +0000 (0:00:00.165) 0:00:46.402 ********* 2025-03-11 00:20:25.303417 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cedb9017-cc47-5b88-9282-51f2e5626d00'}}) 2025-03-11 00:20:25.303934 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6525db2c-5f0a-5e5f-9376-d59b0a20baba'}}) 2025-03-11 00:20:25.304222 | orchestrator | 2025-03-11 00:20:25.304733 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-11 00:20:25.304934 | orchestrator | Tuesday 11 March 2025 00:20:25 +0000 (0:00:00.210) 0:00:46.612 ********* 2025-03-11 00:20:25.522568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cedb9017-cc47-5b88-9282-51f2e5626d00'}})  2025-03-11 00:20:25.523248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6525db2c-5f0a-5e5f-9376-d59b0a20baba'}})  
2025-03-11 00:20:25.523931 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:25.524867 | orchestrator | 2025-03-11 00:20:25.525233 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-11 00:20:25.525587 | orchestrator | Tuesday 11 March 2025 00:20:25 +0000 (0:00:00.218) 0:00:46.831 ********* 2025-03-11 00:20:25.698211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cedb9017-cc47-5b88-9282-51f2e5626d00'}})  2025-03-11 00:20:25.698857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6525db2c-5f0a-5e5f-9376-d59b0a20baba'}})  2025-03-11 00:20:25.699581 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:25.700099 | orchestrator | 2025-03-11 00:20:25.700836 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-11 00:20:25.701016 | orchestrator | Tuesday 11 March 2025 00:20:25 +0000 (0:00:00.176) 0:00:47.007 ********* 2025-03-11 00:20:25.889874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cedb9017-cc47-5b88-9282-51f2e5626d00'}})  2025-03-11 00:20:25.893586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6525db2c-5f0a-5e5f-9376-d59b0a20baba'}})  2025-03-11 00:20:25.894144 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:25.894173 | orchestrator | 2025-03-11 00:20:25.894192 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-11 00:20:25.895600 | orchestrator | Tuesday 11 March 2025 00:20:25 +0000 (0:00:00.188) 0:00:47.196 ********* 2025-03-11 00:20:26.061341 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:20:26.061500 | orchestrator | 2025-03-11 00:20:26.061592 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-11 00:20:26.061615 | 
orchestrator | Tuesday 11 March 2025 00:20:26 +0000 (0:00:00.173) 0:00:47.370 ********* 2025-03-11 00:20:26.223926 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:20:26.224692 | orchestrator | 2025-03-11 00:20:26.224723 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-11 00:20:26.224746 | orchestrator | Tuesday 11 March 2025 00:20:26 +0000 (0:00:00.156) 0:00:47.527 ********* 2025-03-11 00:20:26.356165 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:26.356984 | orchestrator | 2025-03-11 00:20:26.358012 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-11 00:20:26.358788 | orchestrator | Tuesday 11 March 2025 00:20:26 +0000 (0:00:00.138) 0:00:47.666 ********* 2025-03-11 00:20:26.780416 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:26.780998 | orchestrator | 2025-03-11 00:20:26.781034 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-11 00:20:26.781829 | orchestrator | Tuesday 11 March 2025 00:20:26 +0000 (0:00:00.420) 0:00:48.086 ********* 2025-03-11 00:20:26.921760 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:26.922665 | orchestrator | 2025-03-11 00:20:26.924005 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-11 00:20:26.925489 | orchestrator | Tuesday 11 March 2025 00:20:26 +0000 (0:00:00.143) 0:00:48.229 ********* 2025-03-11 00:20:27.079648 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 00:20:27.080461 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:20:27.080928 | orchestrator |  "sdb": { 2025-03-11 00:20:27.082343 | orchestrator |  "osd_lvm_uuid": "cedb9017-cc47-5b88-9282-51f2e5626d00" 2025-03-11 00:20:27.082501 | orchestrator |  }, 2025-03-11 00:20:27.083217 | orchestrator |  "sdc": { 2025-03-11 00:20:27.086621 | orchestrator |  "osd_lvm_uuid": 
"6525db2c-5f0a-5e5f-9376-d59b0a20baba" 2025-03-11 00:20:27.086804 | orchestrator |  } 2025-03-11 00:20:27.086826 | orchestrator |  } 2025-03-11 00:20:27.086841 | orchestrator | } 2025-03-11 00:20:27.086855 | orchestrator | 2025-03-11 00:20:27.086874 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-11 00:20:27.087940 | orchestrator | Tuesday 11 March 2025 00:20:27 +0000 (0:00:00.159) 0:00:48.389 ********* 2025-03-11 00:20:27.240110 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:27.242966 | orchestrator | 2025-03-11 00:20:27.244019 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-11 00:20:27.244728 | orchestrator | Tuesday 11 March 2025 00:20:27 +0000 (0:00:00.159) 0:00:48.548 ********* 2025-03-11 00:20:27.391334 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:27.391731 | orchestrator | 2025-03-11 00:20:27.392125 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-11 00:20:27.392997 | orchestrator | Tuesday 11 March 2025 00:20:27 +0000 (0:00:00.152) 0:00:48.701 ********* 2025-03-11 00:20:27.539441 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:20:27.540061 | orchestrator | 2025-03-11 00:20:27.540375 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-11 00:20:27.540953 | orchestrator | Tuesday 11 March 2025 00:20:27 +0000 (0:00:00.147) 0:00:48.849 ********* 2025-03-11 00:20:27.829670 | orchestrator | changed: [testbed-node-5] => { 2025-03-11 00:20:27.830874 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-11 00:20:27.833858 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:20:27.834914 | orchestrator |  "sdb": { 2025-03-11 00:20:27.836169 | orchestrator |  "osd_lvm_uuid": "cedb9017-cc47-5b88-9282-51f2e5626d00" 2025-03-11 00:20:27.837229 | orchestrator |  }, 2025-03-11 00:20:27.838509 | 
orchestrator |  "sdc": { 2025-03-11 00:20:27.839243 | orchestrator |  "osd_lvm_uuid": "6525db2c-5f0a-5e5f-9376-d59b0a20baba" 2025-03-11 00:20:27.839783 | orchestrator |  } 2025-03-11 00:20:27.840455 | orchestrator |  }, 2025-03-11 00:20:27.840824 | orchestrator |  "lvm_volumes": [ 2025-03-11 00:20:27.841202 | orchestrator |  { 2025-03-11 00:20:27.841676 | orchestrator |  "data": "osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00", 2025-03-11 00:20:27.842107 | orchestrator |  "data_vg": "ceph-cedb9017-cc47-5b88-9282-51f2e5626d00" 2025-03-11 00:20:27.842944 | orchestrator |  }, 2025-03-11 00:20:27.843738 | orchestrator |  { 2025-03-11 00:20:27.844435 | orchestrator |  "data": "osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba", 2025-03-11 00:20:27.844683 | orchestrator |  "data_vg": "ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba" 2025-03-11 00:20:27.845165 | orchestrator |  } 2025-03-11 00:20:27.845506 | orchestrator |  ] 2025-03-11 00:20:27.846057 | orchestrator |  } 2025-03-11 00:20:27.847325 | orchestrator | } 2025-03-11 00:20:27.847875 | orchestrator | 2025-03-11 00:20:27.847905 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-11 00:20:27.847924 | orchestrator | Tuesday 11 March 2025 00:20:27 +0000 (0:00:00.288) 0:00:49.138 ********* 2025-03-11 00:20:29.316092 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-11 00:20:29.316257 | orchestrator | 2025-03-11 00:20:29.316803 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 00:20:29.317288 | orchestrator | 2025-03-11 00:20:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:20:29.318716 | orchestrator | 2025-03-11 00:20:29 | INFO  | Please wait and do not abort execution. 
2025-03-11 00:20:29.318743 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-03-11 00:20:29.319698 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-03-11 00:20:29.320504 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-03-11 00:20:29.321495 | orchestrator |
2025-03-11 00:20:29.322471 | orchestrator |
2025-03-11 00:20:29.323021 | orchestrator |
2025-03-11 00:20:29.323950 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 00:20:29.324294 | orchestrator | Tuesday 11 March 2025 00:20:29 +0000 (0:00:01.486) 0:00:50.624 *********
2025-03-11 00:20:29.325110 | orchestrator | ===============================================================================
2025-03-11 00:20:29.325640 | orchestrator | Write configuration file ------------------------------------------------ 5.38s
2025-03-11 00:20:29.325809 | orchestrator | Add known partitions to the list of available block devices ------------- 1.55s
2025-03-11 00:20:29.328300 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s
2025-03-11 00:20:29.328708 | orchestrator | Get initial list of available block devices ----------------------------- 1.22s
2025-03-11 00:20:29.329071 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s
2025-03-11 00:20:29.329971 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2025-03-11 00:20:29.331113 | orchestrator | Print configuration data ------------------------------------------------ 1.16s
2025-03-11 00:20:29.331463 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2025-03-11 00:20:29.332877 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2025-03-11 00:20:29.334093 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s
2025-03-11 00:20:29.334387 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.85s
2025-03-11 00:20:29.335025 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.84s
2025-03-11 00:20:29.335454 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-03-11 00:20:29.336646 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-03-11 00:20:29.337441 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2025-03-11 00:20:29.339098 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-03-11 00:20:29.340965 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-03-11 00:20:29.341841 | orchestrator | Set WAL devices config data --------------------------------------------- 0.71s
2025-03-11 00:20:29.342570 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-03-11 00:20:29.343416 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.68s
2025-03-11 00:20:41.706715 | orchestrator | 2025-03-11 00:20:41 | INFO  | Task fefd757a-2e6c-4678-9b28-bc4d492dfdc7 is running in background. Output coming soon.
2025-03-11 01:20:44.470669 | orchestrator | 2025-03-11 01:20:44 | INFO  | Task c9780952-0827-4a2f-935f-e294027b1218 (ceph-create-lvm-devices) was prepared for execution.
2025-03-11 01:20:47.960385 | orchestrator | 2025-03-11 01:20:44 | INFO  | It takes a moment until task c9780952-0827-4a2f-935f-e294027b1218 (ceph-create-lvm-devices) has been started and output is visible here.
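The "Print configuration data" output above pairs each OSD device's generated UUID with LV/VG names in the compiled `lvm_volumes` list. The "block only" mapping can be sketched in a few lines of Python (the function name is illustrative, not the playbook's actual implementation; the `osd-block-`/`ceph-` name prefixes are taken from the logged output):

```python
# Sketch of the "Generate lvm_volumes structure (block only)" step seen above.
# Input mirrors the ceph_osd_devices structure printed in the log; each OSD's
# LV is named osd-block-<uuid> and its VG ceph-<uuid>.

def lvm_volumes_block_only(ceph_osd_devices):
    """Build the lvm_volumes list for OSDs using a plain block layout."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "cedb9017-cc47-5b88-9282-51f2e5626d00"},
    "sdc": {"osd_lvm_uuid": "6525db2c-5f0a-5e5f-9376-d59b0a20baba"},
}

print(lvm_volumes_block_only(ceph_osd_devices))
```

With the `sdb`/`sdc` data from the log, this reproduces the two `lvm_volumes` entries that the "Write configuration file" handler persists; the skipped block + db/wal variants would add `db_vg`/`wal_vg` fields on top of the same scheme.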
2025-03-11 01:20:47.960497 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-03-11 01:20:48.540950 | orchestrator |
2025-03-11 01:20:48.541780 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-03-11 01:20:48.542201 | orchestrator |
2025-03-11 01:20:48.841780 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 01:20:48.841878 | orchestrator | Tuesday 11 March 2025 01:20:48 +0000 (0:00:00.501) 0:00:00.501 *********
2025-03-11 01:20:48.841910 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-03-11 01:20:48.842180 | orchestrator |
2025-03-11 01:20:48.842832 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 01:20:48.843442 | orchestrator | Tuesday 11 March 2025 01:20:48 +0000 (0:00:00.301) 0:00:00.803 *********
2025-03-11 01:20:49.071819 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:20:49.072254 | orchestrator |
2025-03-11 01:20:49.073393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:49.822461 | orchestrator | Tuesday 11 March 2025 01:20:49 +0000 (0:00:00.228) 0:00:01.032 *********
2025-03-11 01:20:49.822652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-03-11 01:20:49.822988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-03-11 01:20:49.824092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-03-11 01:20:49.827329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-03-11 01:20:49.828056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-03-11 01:20:49.828856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-03-11 01:20:49.829622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-03-11 01:20:49.830254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-03-11 01:20:49.833793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-03-11 01:20:49.833866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-03-11 01:20:49.833883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-03-11 01:20:49.833901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-03-11 01:20:49.834248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-03-11 01:20:49.834703 | orchestrator |
2025-03-11 01:20:49.835127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:49.836333 | orchestrator | Tuesday 11 March 2025 01:20:49 +0000 (0:00:00.751) 0:00:01.783 *********
2025-03-11 01:20:50.039957 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:50.040467 | orchestrator |
2025-03-11 01:20:50.042100 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:50.042503 | orchestrator | Tuesday 11 March 2025 01:20:50 +0000 (0:00:00.215) 0:00:01.999 *********
2025-03-11 01:20:50.256267 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:50.256677 | orchestrator |
2025-03-11 01:20:50.257482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:50.258186 | orchestrator | Tuesday 11 March 2025 01:20:50 +0000 (0:00:00.218) 0:00:02.218 *********
2025-03-11 01:20:50.455734 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:50.456005 | orchestrator |
2025-03-11 01:20:50.456738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:50.457189 | orchestrator | Tuesday 11 March 2025 01:20:50 +0000 (0:00:00.199) 0:00:02.417 *********
2025-03-11 01:20:50.655827 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:50.656923 | orchestrator |
2025-03-11 01:20:50.657421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:50.658140 | orchestrator | Tuesday 11 March 2025 01:20:50 +0000 (0:00:00.200) 0:00:02.617 *********
2025-03-11 01:20:50.865393 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:50.865891 | orchestrator |
2025-03-11 01:20:50.867040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:50.867292 | orchestrator | Tuesday 11 March 2025 01:20:50 +0000 (0:00:00.208) 0:00:02.826 *********
2025-03-11 01:20:51.078392 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:51.079896 | orchestrator |
2025-03-11 01:20:51.079927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:51.080371 | orchestrator | Tuesday 11 March 2025 01:20:51 +0000 (0:00:00.210) 0:00:03.037 *********
2025-03-11 01:20:51.337886 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:51.338586 | orchestrator |
2025-03-11 01:20:51.341280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:51.341950 | orchestrator | Tuesday 11 March 2025 01:20:51 +0000 (0:00:00.260) 0:00:03.298 *********
2025-03-11 01:20:51.540781 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:51.541710 | orchestrator |
2025-03-11 01:20:51.542949 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:51.545275 | orchestrator | Tuesday 11 March 2025 01:20:51 +0000 (0:00:00.204) 0:00:03.502 *********
2025-03-11 01:20:52.186833 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_194aeec8-a038-4d98-ad9f-169d629e88aa)
2025-03-11 01:20:52.187059 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_194aeec8-a038-4d98-ad9f-169d629e88aa)
2025-03-11 01:20:52.187400 | orchestrator |
2025-03-11 01:20:52.187458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:52.187520 | orchestrator | Tuesday 11 March 2025 01:20:52 +0000 (0:00:00.644) 0:00:04.146 *********
2025-03-11 01:20:53.040824 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed3d5c7a-4300-47cf-88fa-db7e232461c4)
2025-03-11 01:20:53.497626 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed3d5c7a-4300-47cf-88fa-db7e232461c4)
2025-03-11 01:20:53.497731 | orchestrator |
2025-03-11 01:20:53.497748 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:53.497763 | orchestrator | Tuesday 11 March 2025 01:20:53 +0000 (0:00:00.458) 0:00:05.000 *********
2025-03-11 01:20:53.497793 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ef8fa17c-1885-4415-b267-a55d447b75a1)
2025-03-11 01:20:53.498116 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ef8fa17c-1885-4415-b267-a55d447b75a1)
2025-03-11 01:20:53.498183 | orchestrator |
2025-03-11 01:20:53.498274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:53.498333 | orchestrator | Tuesday 11 March 2025 01:20:53 +0000 (0:00:00.458) 0:00:05.459 *********
2025-03-11 01:20:54.024544 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4a50e71d-fbe2-4470-bd50-185934b47889)
2025-03-11 01:20:54.025510 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4a50e71d-fbe2-4470-bd50-185934b47889)
2025-03-11 01:20:54.025544 | orchestrator |
2025-03-11 01:20:54.025592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:20:54.026207 | orchestrator | Tuesday 11 March 2025 01:20:54 +0000 (0:00:00.526) 0:00:05.985 *********
2025-03-11 01:20:54.385157 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 01:20:54.385905 | orchestrator |
2025-03-11 01:20:54.388259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:20:54.388787 | orchestrator | Tuesday 11 March 2025 01:20:54 +0000 (0:00:00.359) 0:00:06.345 *********
2025-03-11 01:20:54.964292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-03-11 01:20:54.964709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-03-11 01:20:54.965043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-03-11 01:20:54.965671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-03-11 01:20:54.967363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-03-11 01:20:54.969219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-03-11 01:20:54.969680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-03-11 01:20:54.969707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-03-11 01:20:54.969726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-03-11 01:20:54.970384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-03-11 01:20:54.971026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-03-11 01:20:54.971464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-03-11 01:20:54.971929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-03-11 01:20:54.972189 | orchestrator |
2025-03-11 01:20:54.972837 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:20:54.973176 | orchestrator | Tuesday 11 March 2025 01:20:54 +0000 (0:00:00.580) 0:00:06.926 *********
2025-03-11 01:20:55.178907 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:55.179423 | orchestrator |
2025-03-11 01:20:55.181779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:20:55.392452 | orchestrator | Tuesday 11 March 2025 01:20:55 +0000 (0:00:00.212) 0:00:07.138 *********
2025-03-11 01:20:55.392625 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:55.393112 | orchestrator |
2025-03-11 01:20:55.393247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:20:55.395688 | orchestrator | Tuesday 11 March 2025 01:20:55 +0000 (0:00:00.215) 0:00:07.353 *********
2025-03-11 01:20:55.594510 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:55.595857 | orchestrator |
2025-03-11 01:20:55.596659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:20:55.597717 | orchestrator | Tuesday 11 March 2025 01:20:55 +0000 (0:00:00.201) 0:00:07.555 *********
2025-03-11 01:20:55.846874 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:20:55.847279 | orchestrator |
2025-03-11 01:20:55.847317 | orchestrator | TASK [Add known
partitions to the list of available block devices] ************* 2025-03-11 01:20:55.847667 | orchestrator | Tuesday 11 March 2025 01:20:55 +0000 (0:00:00.252) 0:00:07.808 ********* 2025-03-11 01:20:56.546842 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:56.549320 | orchestrator | 2025-03-11 01:20:56.551303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:56.551340 | orchestrator | Tuesday 11 March 2025 01:20:56 +0000 (0:00:00.698) 0:00:08.506 ********* 2025-03-11 01:20:56.752036 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:56.753313 | orchestrator | 2025-03-11 01:20:56.754280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:56.754552 | orchestrator | Tuesday 11 March 2025 01:20:56 +0000 (0:00:00.206) 0:00:08.713 ********* 2025-03-11 01:20:56.970724 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:56.971256 | orchestrator | 2025-03-11 01:20:56.972018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:56.972833 | orchestrator | Tuesday 11 March 2025 01:20:56 +0000 (0:00:00.218) 0:00:08.932 ********* 2025-03-11 01:20:57.214855 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:57.216436 | orchestrator | 2025-03-11 01:20:57.216840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:57.216871 | orchestrator | Tuesday 11 March 2025 01:20:57 +0000 (0:00:00.241) 0:00:09.174 ********* 2025-03-11 01:20:57.966472 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-11 01:20:57.967405 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-11 01:20:57.967852 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-11 01:20:57.967884 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-11 01:20:57.967991 | orchestrator | 2025-03-11 
01:20:57.968805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:57.969076 | orchestrator | Tuesday 11 March 2025 01:20:57 +0000 (0:00:00.752) 0:00:09.926 ********* 2025-03-11 01:20:58.178442 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:58.179148 | orchestrator | 2025-03-11 01:20:58.180284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:58.180318 | orchestrator | Tuesday 11 March 2025 01:20:58 +0000 (0:00:00.212) 0:00:10.138 ********* 2025-03-11 01:20:58.379640 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:58.380072 | orchestrator | 2025-03-11 01:20:58.381018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:58.382426 | orchestrator | Tuesday 11 March 2025 01:20:58 +0000 (0:00:00.201) 0:00:10.340 ********* 2025-03-11 01:20:58.591786 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:58.593529 | orchestrator | 2025-03-11 01:20:58.594318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:20:58.595339 | orchestrator | Tuesday 11 March 2025 01:20:58 +0000 (0:00:00.211) 0:00:10.552 ********* 2025-03-11 01:20:58.822549 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:58.823150 | orchestrator | 2025-03-11 01:20:58.823918 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-11 01:20:58.824048 | orchestrator | Tuesday 11 March 2025 01:20:58 +0000 (0:00:00.231) 0:00:10.784 ********* 2025-03-11 01:20:58.951846 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:20:58.952693 | orchestrator | 2025-03-11 01:20:58.954974 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-11 01:20:58.956199 | orchestrator | Tuesday 11 March 2025 01:20:58 +0000 (0:00:00.128) 
0:00:10.912 ********* 2025-03-11 01:20:59.168354 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a6d50340-d3ef-59a1-9773-9878296a9d55'}}) 2025-03-11 01:20:59.169641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}}) 2025-03-11 01:20:59.169683 | orchestrator | 2025-03-11 01:20:59.170108 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-11 01:20:59.170729 | orchestrator | Tuesday 11 March 2025 01:20:59 +0000 (0:00:00.216) 0:00:11.129 ********* 2025-03-11 01:21:01.743809 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'}) 2025-03-11 01:21:01.746779 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}) 2025-03-11 01:21:01.746821 | orchestrator | 2025-03-11 01:21:01.748836 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-11 01:21:01.749029 | orchestrator | Tuesday 11 March 2025 01:21:01 +0000 (0:00:02.572) 0:00:13.702 ********* 2025-03-11 01:21:01.947958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:01.948960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:01.949897 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:01.951161 | orchestrator | 2025-03-11 01:21:01.951466 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-11 01:21:01.952170 | orchestrator | Tuesday 11 March 2025 
01:21:01 +0000 (0:00:00.206) 0:00:13.908 ********* 2025-03-11 01:21:03.537514 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'}) 2025-03-11 01:21:03.688876 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}) 2025-03-11 01:21:03.688951 | orchestrator | 2025-03-11 01:21:03.688969 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-11 01:21:03.688985 | orchestrator | Tuesday 11 March 2025 01:21:03 +0000 (0:00:01.579) 0:00:15.488 ********* 2025-03-11 01:21:03.689012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:03.689329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:03.690150 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:03.690604 | orchestrator | 2025-03-11 01:21:03.691617 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-11 01:21:03.691848 | orchestrator | Tuesday 11 March 2025 01:21:03 +0000 (0:00:00.162) 0:00:15.650 ********* 2025-03-11 01:21:03.845050 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:03.845345 | orchestrator | 2025-03-11 01:21:03.846101 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-11 01:21:03.847703 | orchestrator | Tuesday 11 March 2025 01:21:03 +0000 (0:00:00.155) 0:00:15.806 ********* 2025-03-11 01:21:04.000859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 
'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:04.002397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:04.002673 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:04.003718 | orchestrator | 2025-03-11 01:21:04.004097 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-11 01:21:04.004670 | orchestrator | Tuesday 11 March 2025 01:21:03 +0000 (0:00:00.156) 0:00:15.962 ********* 2025-03-11 01:21:04.149037 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:04.150129 | orchestrator | 2025-03-11 01:21:04.150685 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-11 01:21:04.152083 | orchestrator | Tuesday 11 March 2025 01:21:04 +0000 (0:00:00.148) 0:00:16.111 ********* 2025-03-11 01:21:04.326157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:04.329068 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:04.329386 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:04.332824 | orchestrator | 2025-03-11 01:21:04.333539 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-11 01:21:04.334252 | orchestrator | Tuesday 11 March 2025 01:21:04 +0000 (0:00:00.173) 0:00:16.284 ********* 2025-03-11 01:21:04.678440 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:04.678906 | orchestrator | 2025-03-11 01:21:04.679310 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-11 01:21:04.679859 | orchestrator | 
Tuesday 11 March 2025 01:21:04 +0000 (0:00:00.355) 0:00:16.640 ********* 2025-03-11 01:21:04.865048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:04.865164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:04.868891 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:04.868989 | orchestrator | 2025-03-11 01:21:04.869008 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-11 01:21:04.869027 | orchestrator | Tuesday 11 March 2025 01:21:04 +0000 (0:00:00.184) 0:00:16.824 ********* 2025-03-11 01:21:05.008145 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:05.009139 | orchestrator | 2025-03-11 01:21:05.009401 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:21:05.010233 | orchestrator | Tuesday 11 March 2025 01:21:04 +0000 (0:00:00.144) 0:00:16.969 ********* 2025-03-11 01:21:05.196912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:05.197436 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:05.198184 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:05.199062 | orchestrator | 2025-03-11 01:21:05.199715 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:21:05.376170 | orchestrator | Tuesday 11 March 2025 01:21:05 +0000 (0:00:00.188) 0:00:17.157 ********* 2025-03-11 01:21:05.376238 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:05.376471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:05.377415 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:05.378785 | orchestrator | 2025-03-11 01:21:05.379615 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:21:05.379650 | orchestrator | Tuesday 11 March 2025 01:21:05 +0000 (0:00:00.179) 0:00:17.337 ********* 2025-03-11 01:21:05.568079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:05.569515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:05.573201 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:05.573668 | orchestrator | 2025-03-11 01:21:05.574829 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-11 01:21:05.575525 | orchestrator | Tuesday 11 March 2025 01:21:05 +0000 (0:00:00.190) 0:00:17.528 ********* 2025-03-11 01:21:05.739159 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:05.739429 | orchestrator | 2025-03-11 01:21:05.740366 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-11 01:21:05.873038 | orchestrator | Tuesday 11 March 2025 01:21:05 +0000 (0:00:00.172) 0:00:17.701 ********* 2025-03-11 01:21:05.873103 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:05.874064 | orchestrator | 2025-03-11 01:21:05.874358 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-03-11 01:21:05.877191 | orchestrator | Tuesday 11 March 2025 01:21:05 +0000 (0:00:00.132) 0:00:17.833 ********* 2025-03-11 01:21:06.021540 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:06.021769 | orchestrator | 2025-03-11 01:21:06.022851 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-11 01:21:06.023680 | orchestrator | Tuesday 11 March 2025 01:21:06 +0000 (0:00:00.147) 0:00:17.981 ********* 2025-03-11 01:21:06.171498 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:21:06.172432 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-11 01:21:06.174123 | orchestrator | } 2025-03-11 01:21:06.174726 | orchestrator | 2025-03-11 01:21:06.175513 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-11 01:21:06.176369 | orchestrator | Tuesday 11 March 2025 01:21:06 +0000 (0:00:00.152) 0:00:18.133 ********* 2025-03-11 01:21:06.330965 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:21:06.331246 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-11 01:21:06.332006 | orchestrator | } 2025-03-11 01:21:06.332423 | orchestrator | 2025-03-11 01:21:06.332658 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-11 01:21:06.333133 | orchestrator | Tuesday 11 March 2025 01:21:06 +0000 (0:00:00.149) 0:00:18.282 ********* 2025-03-11 01:21:06.470485 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:21:06.471056 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-11 01:21:06.471091 | orchestrator | } 2025-03-11 01:21:06.471381 | orchestrator | 2025-03-11 01:21:06.472093 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-11 01:21:07.419644 | orchestrator | Tuesday 11 March 2025 01:21:06 +0000 (0:00:00.149) 0:00:18.432 ********* 2025-03-11 01:21:07.419775 | orchestrator | ok: 
[testbed-node-3] 2025-03-11 01:21:07.420419 | orchestrator | 2025-03-11 01:21:07.420459 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-11 01:21:07.420678 | orchestrator | Tuesday 11 March 2025 01:21:07 +0000 (0:00:00.947) 0:00:19.380 ********* 2025-03-11 01:21:07.989239 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:08.538483 | orchestrator | 2025-03-11 01:21:08.538679 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-11 01:21:08.538701 | orchestrator | Tuesday 11 March 2025 01:21:07 +0000 (0:00:00.548) 0:00:19.929 ********* 2025-03-11 01:21:08.538760 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:08.538844 | orchestrator | 2025-03-11 01:21:08.540597 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-11 01:21:08.542588 | orchestrator | Tuesday 11 March 2025 01:21:08 +0000 (0:00:00.569) 0:00:20.498 ********* 2025-03-11 01:21:08.693394 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:08.694542 | orchestrator | 2025-03-11 01:21:08.694620 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-11 01:21:08.695757 | orchestrator | Tuesday 11 March 2025 01:21:08 +0000 (0:00:00.153) 0:00:20.651 ********* 2025-03-11 01:21:08.824283 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:08.824643 | orchestrator | 2025-03-11 01:21:08.824872 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-11 01:21:08.824907 | orchestrator | Tuesday 11 March 2025 01:21:08 +0000 (0:00:00.133) 0:00:20.785 ********* 2025-03-11 01:21:08.945248 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:08.946094 | orchestrator | 2025-03-11 01:21:08.946435 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-11 01:21:08.946503 | orchestrator | 
Tuesday 11 March 2025 01:21:08 +0000 (0:00:00.121) 0:00:20.906 ********* 2025-03-11 01:21:09.081630 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:21:09.082371 | orchestrator |  "vgs_report": { 2025-03-11 01:21:09.082718 | orchestrator |  "vg": [] 2025-03-11 01:21:09.082749 | orchestrator |  } 2025-03-11 01:21:09.083359 | orchestrator | } 2025-03-11 01:21:09.083761 | orchestrator | 2025-03-11 01:21:09.084600 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-11 01:21:09.084694 | orchestrator | Tuesday 11 March 2025 01:21:09 +0000 (0:00:00.137) 0:00:21.044 ********* 2025-03-11 01:21:09.209240 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:09.209691 | orchestrator | 2025-03-11 01:21:09.210006 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-11 01:21:09.210204 | orchestrator | Tuesday 11 March 2025 01:21:09 +0000 (0:00:00.126) 0:00:21.170 ********* 2025-03-11 01:21:09.377620 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:09.378618 | orchestrator | 2025-03-11 01:21:09.380456 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-11 01:21:09.380547 | orchestrator | Tuesday 11 March 2025 01:21:09 +0000 (0:00:00.168) 0:00:21.338 ********* 2025-03-11 01:21:09.521772 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:09.522630 | orchestrator | 2025-03-11 01:21:09.522745 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-11 01:21:09.523720 | orchestrator | Tuesday 11 March 2025 01:21:09 +0000 (0:00:00.144) 0:00:21.483 ********* 2025-03-11 01:21:09.663512 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:09.663841 | orchestrator | 2025-03-11 01:21:09.664472 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-11 01:21:09.665994 | orchestrator | Tuesday 
11 March 2025 01:21:09 +0000 (0:00:00.134) 0:00:21.618 ********* 2025-03-11 01:21:10.021984 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.022170 | orchestrator | 2025-03-11 01:21:10.022195 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-11 01:21:10.022915 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.363) 0:00:21.981 ********* 2025-03-11 01:21:10.178915 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.179380 | orchestrator | 2025-03-11 01:21:10.180305 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-11 01:21:10.182060 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.156) 0:00:22.138 ********* 2025-03-11 01:21:10.323817 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.324276 | orchestrator | 2025-03-11 01:21:10.324606 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-11 01:21:10.325109 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.146) 0:00:22.285 ********* 2025-03-11 01:21:10.464847 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.465100 | orchestrator | 2025-03-11 01:21:10.466194 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-11 01:21:10.466965 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.138) 0:00:22.423 ********* 2025-03-11 01:21:10.609008 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.609219 | orchestrator | 2025-03-11 01:21:10.609487 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-11 01:21:10.609870 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.147) 0:00:22.570 ********* 2025-03-11 01:21:10.742960 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.743414 | orchestrator | 2025-03-11 01:21:10.744391 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-11 01:21:10.744920 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.133) 0:00:22.705 ********* 2025-03-11 01:21:10.879095 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:10.880612 | orchestrator | 2025-03-11 01:21:10.882870 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-11 01:21:10.884153 | orchestrator | Tuesday 11 March 2025 01:21:10 +0000 (0:00:00.134) 0:00:22.839 ********* 2025-03-11 01:21:11.028804 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:11.029217 | orchestrator | 2025-03-11 01:21:11.030920 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-11 01:21:11.031123 | orchestrator | Tuesday 11 March 2025 01:21:11 +0000 (0:00:00.147) 0:00:22.987 ********* 2025-03-11 01:21:11.175954 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:11.179640 | orchestrator | 2025-03-11 01:21:11.179689 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-11 01:21:11.180881 | orchestrator | Tuesday 11 March 2025 01:21:11 +0000 (0:00:00.148) 0:00:23.136 ********* 2025-03-11 01:21:11.319360 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:11.319663 | orchestrator | 2025-03-11 01:21:11.320539 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-11 01:21:11.321500 | orchestrator | Tuesday 11 March 2025 01:21:11 +0000 (0:00:00.143) 0:00:23.279 ********* 2025-03-11 01:21:11.520514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:11.520744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 
'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:11.521883 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:11.524214 | orchestrator | 2025-03-11 01:21:11.524671 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-11 01:21:11.524706 | orchestrator | Tuesday 11 March 2025 01:21:11 +0000 (0:00:00.201) 0:00:23.481 ********* 2025-03-11 01:21:11.692345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:11.693013 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:11.695906 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:12.101194 | orchestrator | 2025-03-11 01:21:12.101393 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-11 01:21:12.101414 | orchestrator | Tuesday 11 March 2025 01:21:11 +0000 (0:00:00.170) 0:00:23.652 ********* 2025-03-11 01:21:12.101443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:12.101525 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:12.102471 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:12.103200 | orchestrator | 2025-03-11 01:21:12.103773 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-11 01:21:12.104554 | orchestrator | Tuesday 11 March 2025 01:21:12 +0000 (0:00:00.405) 0:00:24.057 ********* 2025-03-11 01:21:12.306003 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:12.307077 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:12.307520 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:12.307978 | orchestrator | 2025-03-11 01:21:12.308372 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-11 01:21:12.308889 | orchestrator | Tuesday 11 March 2025 01:21:12 +0000 (0:00:00.209) 0:00:24.267 ********* 2025-03-11 01:21:12.475677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:12.476909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:12.479610 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:12.480327 | orchestrator | 2025-03-11 01:21:12.481509 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-11 01:21:12.482196 | orchestrator | Tuesday 11 March 2025 01:21:12 +0000 (0:00:00.168) 0:00:24.436 ********* 2025-03-11 01:21:12.646285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:12.648027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:12.649300 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:12.650738 | orchestrator | 2025-03-11 01:21:12.651883 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-03-11 01:21:12.652077 | orchestrator | Tuesday 11 March 2025 01:21:12 +0000 (0:00:00.169) 0:00:24.606 ********* 2025-03-11 01:21:12.866851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:12.867714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:12.871431 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:13.060328 | orchestrator | 2025-03-11 01:21:13.060374 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-11 01:21:13.060391 | orchestrator | Tuesday 11 March 2025 01:21:12 +0000 (0:00:00.220) 0:00:24.826 ********* 2025-03-11 01:21:13.060414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:13.060897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:13.061365 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:13.062503 | orchestrator | 2025-03-11 01:21:13.063122 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-11 01:21:13.063705 | orchestrator | Tuesday 11 March 2025 01:21:13 +0000 (0:00:00.194) 0:00:25.020 ********* 2025-03-11 01:21:13.605168 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:13.605610 | orchestrator | 2025-03-11 01:21:13.605698 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-11 01:21:14.162483 | orchestrator | Tuesday 11 March 2025 01:21:13 +0000 
(0:00:00.545) 0:00:25.566 ********* 2025-03-11 01:21:14.162661 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:14.163410 | orchestrator | 2025-03-11 01:21:14.166129 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-11 01:21:14.307640 | orchestrator | Tuesday 11 March 2025 01:21:14 +0000 (0:00:00.555) 0:00:26.121 ********* 2025-03-11 01:21:14.307728 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:21:14.311016 | orchestrator | 2025-03-11 01:21:14.311955 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-11 01:21:14.313398 | orchestrator | Tuesday 11 March 2025 01:21:14 +0000 (0:00:00.146) 0:00:26.268 ********* 2025-03-11 01:21:14.508807 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'vg_name': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'}) 2025-03-11 01:21:14.510011 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'vg_name': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'}) 2025-03-11 01:21:14.511362 | orchestrator | 2025-03-11 01:21:14.513452 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-11 01:21:14.514873 | orchestrator | Tuesday 11 March 2025 01:21:14 +0000 (0:00:00.201) 0:00:26.469 ********* 2025-03-11 01:21:14.897859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:14.898783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:14.899374 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:14.900870 | orchestrator | 2025-03-11 01:21:14.901619 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-03-11 01:21:14.902347 | orchestrator | Tuesday 11 March 2025 01:21:14 +0000 (0:00:00.388) 0:00:26.858 ********* 2025-03-11 01:21:15.098686 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:15.098808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:15.099124 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:15.099554 | orchestrator | 2025-03-11 01:21:15.100868 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-11 01:21:15.282334 | orchestrator | Tuesday 11 March 2025 01:21:15 +0000 (0:00:00.199) 0:00:27.058 ********* 2025-03-11 01:21:15.282380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55', 'data_vg': 'ceph-a6d50340-d3ef-59a1-9773-9878296a9d55'})  2025-03-11 01:21:15.284154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211', 'data_vg': 'ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211'})  2025-03-11 01:21:15.285913 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:21:15.287035 | orchestrator | 2025-03-11 01:21:15.287754 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-11 01:21:15.288398 | orchestrator | Tuesday 11 March 2025 01:21:15 +0000 (0:00:00.184) 0:00:27.242 ********* 2025-03-11 01:21:16.042296 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:21:16.042708 | orchestrator |  "lvm_report": { 2025-03-11 01:21:16.043528 | orchestrator |  "lv": [ 2025-03-11 01:21:16.044456 | orchestrator |  { 2025-03-11 01:21:16.044801 | orchestrator |  "lv_name": 
"osd-block-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211", 2025-03-11 01:21:16.046526 | orchestrator |  "vg_name": "ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211" 2025-03-11 01:21:16.049529 | orchestrator |  }, 2025-03-11 01:21:16.050110 | orchestrator |  { 2025-03-11 01:21:16.050333 | orchestrator |  "lv_name": "osd-block-a6d50340-d3ef-59a1-9773-9878296a9d55", 2025-03-11 01:21:16.052054 | orchestrator |  "vg_name": "ceph-a6d50340-d3ef-59a1-9773-9878296a9d55" 2025-03-11 01:21:16.052097 | orchestrator |  } 2025-03-11 01:21:16.052128 | orchestrator |  ], 2025-03-11 01:21:16.052140 | orchestrator |  "pv": [ 2025-03-11 01:21:16.052516 | orchestrator |  { 2025-03-11 01:21:16.055254 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-11 01:21:16.055416 | orchestrator |  "vg_name": "ceph-a6d50340-d3ef-59a1-9773-9878296a9d55" 2025-03-11 01:21:16.055441 | orchestrator |  }, 2025-03-11 01:21:16.056215 | orchestrator |  { 2025-03-11 01:21:16.057203 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-11 01:21:16.057598 | orchestrator |  "vg_name": "ceph-7cf9be44-5ddc-5078-ba28-c8dfc9bc1211" 2025-03-11 01:21:16.057980 | orchestrator |  } 2025-03-11 01:21:16.058297 | orchestrator |  ] 2025-03-11 01:21:16.058925 | orchestrator |  } 2025-03-11 01:21:16.059220 | orchestrator | } 2025-03-11 01:21:16.059664 | orchestrator | 2025-03-11 01:21:16.060084 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-11 01:21:16.060462 | orchestrator | 2025-03-11 01:21:16.060895 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-11 01:21:16.061210 | orchestrator | Tuesday 11 March 2025 01:21:16 +0000 (0:00:00.761) 0:00:28.003 ********* 2025-03-11 01:21:16.669169 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-11 01:21:16.670279 | orchestrator | 2025-03-11 01:21:16.670627 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-11 
01:21:16.671411 | orchestrator | Tuesday 11 March 2025 01:21:16 +0000 (0:00:00.626) 0:00:28.630 ********* 2025-03-11 01:21:16.903059 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:21:16.904520 | orchestrator | 2025-03-11 01:21:16.904819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:16.907369 | orchestrator | Tuesday 11 March 2025 01:21:16 +0000 (0:00:00.234) 0:00:28.864 ********* 2025-03-11 01:21:17.422362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-11 01:21:17.425871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-11 01:21:17.426310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-11 01:21:17.426921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-11 01:21:17.428007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-11 01:21:17.429238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-11 01:21:17.430414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-11 01:21:17.431409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-11 01:21:17.432662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-11 01:21:17.433133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-11 01:21:17.434232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-11 01:21:17.435058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-11 01:21:17.436484 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-11 01:21:17.436608 | orchestrator | 2025-03-11 01:21:17.437166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:17.437670 | orchestrator | Tuesday 11 March 2025 01:21:17 +0000 (0:00:00.516) 0:00:29.381 ********* 2025-03-11 01:21:17.632602 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:17.633245 | orchestrator | 2025-03-11 01:21:17.633933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:17.634700 | orchestrator | Tuesday 11 March 2025 01:21:17 +0000 (0:00:00.212) 0:00:29.593 ********* 2025-03-11 01:21:17.837982 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:17.838392 | orchestrator | 2025-03-11 01:21:17.838687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:17.839169 | orchestrator | Tuesday 11 March 2025 01:21:17 +0000 (0:00:00.206) 0:00:29.800 ********* 2025-03-11 01:21:18.102862 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:18.104460 | orchestrator | 2025-03-11 01:21:18.106617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:18.106973 | orchestrator | Tuesday 11 March 2025 01:21:18 +0000 (0:00:00.261) 0:00:30.061 ********* 2025-03-11 01:21:18.317825 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:18.317964 | orchestrator | 2025-03-11 01:21:18.318913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:18.319258 | orchestrator | Tuesday 11 March 2025 01:21:18 +0000 (0:00:00.217) 0:00:30.279 ********* 2025-03-11 01:21:18.528700 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:18.529770 | orchestrator | 2025-03-11 01:21:18.530214 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-03-11 01:21:18.531053 | orchestrator | Tuesday 11 March 2025 01:21:18 +0000 (0:00:00.210) 0:00:30.489 ********* 2025-03-11 01:21:18.754265 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:18.754477 | orchestrator | 2025-03-11 01:21:18.755355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:18.756207 | orchestrator | Tuesday 11 March 2025 01:21:18 +0000 (0:00:00.226) 0:00:30.716 ********* 2025-03-11 01:21:18.968407 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:18.968547 | orchestrator | 2025-03-11 01:21:18.969601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:18.970440 | orchestrator | Tuesday 11 March 2025 01:21:18 +0000 (0:00:00.213) 0:00:30.929 ********* 2025-03-11 01:21:19.416283 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:19.416994 | orchestrator | 2025-03-11 01:21:19.417914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:19.419097 | orchestrator | Tuesday 11 March 2025 01:21:19 +0000 (0:00:00.445) 0:00:31.375 ********* 2025-03-11 01:21:19.892160 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_66c0b794-c9eb-432e-948e-a9141cffb78f) 2025-03-11 01:21:19.892659 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_66c0b794-c9eb-432e-948e-a9141cffb78f) 2025-03-11 01:21:19.893589 | orchestrator | 2025-03-11 01:21:19.894645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:19.895286 | orchestrator | Tuesday 11 March 2025 01:21:19 +0000 (0:00:00.477) 0:00:31.853 ********* 2025-03-11 01:21:20.452913 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7de028b3-7e0d-4688-b625-ea2556c506ce) 2025-03-11 01:21:20.454252 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7de028b3-7e0d-4688-b625-ea2556c506ce) 2025-03-11 01:21:20.454301 | orchestrator | 2025-03-11 01:21:20.455170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:20.455844 | orchestrator | Tuesday 11 March 2025 01:21:20 +0000 (0:00:00.559) 0:00:32.412 ********* 2025-03-11 01:21:20.937558 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2ca5ef8e-9fe9-400b-8f24-d393273052c7) 2025-03-11 01:21:20.938716 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2ca5ef8e-9fe9-400b-8f24-d393273052c7) 2025-03-11 01:21:20.941327 | orchestrator | 2025-03-11 01:21:21.412994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:21.413114 | orchestrator | Tuesday 11 March 2025 01:21:20 +0000 (0:00:00.486) 0:00:32.898 ********* 2025-03-11 01:21:21.413168 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9bc23235-2266-4af6-bdbc-90727a536515) 2025-03-11 01:21:21.773394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9bc23235-2266-4af6-bdbc-90727a536515) 2025-03-11 01:21:21.773498 | orchestrator | 2025-03-11 01:21:21.773516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:21:21.773558 | orchestrator | Tuesday 11 March 2025 01:21:21 +0000 (0:00:00.473) 0:00:33.372 ********* 2025-03-11 01:21:21.773620 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 01:21:21.773693 | orchestrator | 2025-03-11 01:21:21.773857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:21.774079 | orchestrator | Tuesday 11 March 2025 01:21:21 +0000 (0:00:00.363) 0:00:33.735 ********* 2025-03-11 01:21:22.283102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-03-11 01:21:22.283313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-11 01:21:22.284012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-11 01:21:22.284465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-11 01:21:22.285222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-11 01:21:22.285414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-11 01:21:22.286138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-11 01:21:22.287040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-11 01:21:22.287391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-11 01:21:22.288255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-11 01:21:22.288362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-11 01:21:22.289285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-11 01:21:22.289627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-11 01:21:22.290312 | orchestrator | 2025-03-11 01:21:22.290823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:22.291030 | orchestrator | Tuesday 11 March 2025 01:21:22 +0000 (0:00:00.508) 0:00:34.243 ********* 2025-03-11 01:21:22.475014 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:22.475171 | orchestrator | 2025-03-11 
01:21:22.476470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:22.476963 | orchestrator | Tuesday 11 March 2025 01:21:22 +0000 (0:00:00.190) 0:00:34.434 ********* 2025-03-11 01:21:22.923696 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:22.924031 | orchestrator | 2025-03-11 01:21:22.924066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:22.924090 | orchestrator | Tuesday 11 March 2025 01:21:22 +0000 (0:00:00.449) 0:00:34.884 ********* 2025-03-11 01:21:23.131251 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:23.131615 | orchestrator | 2025-03-11 01:21:23.131693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:23.132001 | orchestrator | Tuesday 11 March 2025 01:21:23 +0000 (0:00:00.208) 0:00:35.092 ********* 2025-03-11 01:21:23.336669 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:23.337632 | orchestrator | 2025-03-11 01:21:23.338081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:23.341164 | orchestrator | Tuesday 11 March 2025 01:21:23 +0000 (0:00:00.203) 0:00:35.296 ********* 2025-03-11 01:21:23.536203 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:23.538175 | orchestrator | 2025-03-11 01:21:23.538542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:23.540804 | orchestrator | Tuesday 11 March 2025 01:21:23 +0000 (0:00:00.200) 0:00:35.496 ********* 2025-03-11 01:21:23.787374 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:23.787665 | orchestrator | 2025-03-11 01:21:23.788000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:23.788957 | orchestrator | Tuesday 11 March 2025 01:21:23 +0000 (0:00:00.251) 
0:00:35.748 ********* 2025-03-11 01:21:23.991904 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:23.992095 | orchestrator | 2025-03-11 01:21:23.993067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:23.993139 | orchestrator | Tuesday 11 March 2025 01:21:23 +0000 (0:00:00.205) 0:00:35.953 ********* 2025-03-11 01:21:24.215187 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:24.215371 | orchestrator | 2025-03-11 01:21:24.216053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:24.216369 | orchestrator | Tuesday 11 March 2025 01:21:24 +0000 (0:00:00.222) 0:00:36.176 ********* 2025-03-11 01:21:24.980974 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-11 01:21:24.981769 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-11 01:21:24.981813 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-11 01:21:24.981929 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-11 01:21:24.981990 | orchestrator | 2025-03-11 01:21:24.982298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:24.982686 | orchestrator | Tuesday 11 March 2025 01:21:24 +0000 (0:00:00.766) 0:00:36.942 ********* 2025-03-11 01:21:25.204939 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:25.205488 | orchestrator | 2025-03-11 01:21:25.207768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:25.208489 | orchestrator | Tuesday 11 March 2025 01:21:25 +0000 (0:00:00.221) 0:00:37.164 ********* 2025-03-11 01:21:25.418666 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:25.419587 | orchestrator | 2025-03-11 01:21:25.421167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:25.422548 | orchestrator | Tuesday 11 
March 2025 01:21:25 +0000 (0:00:00.215) 0:00:37.379 ********* 2025-03-11 01:21:25.655861 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:25.657626 | orchestrator | 2025-03-11 01:21:25.658750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:21:25.660589 | orchestrator | Tuesday 11 March 2025 01:21:25 +0000 (0:00:00.237) 0:00:37.617 ********* 2025-03-11 01:21:26.387878 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:26.388780 | orchestrator | 2025-03-11 01:21:26.388821 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-11 01:21:26.389029 | orchestrator | Tuesday 11 March 2025 01:21:26 +0000 (0:00:00.730) 0:00:38.347 ********* 2025-03-11 01:21:26.540477 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:26.540939 | orchestrator | 2025-03-11 01:21:26.541699 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-11 01:21:26.542500 | orchestrator | Tuesday 11 March 2025 01:21:26 +0000 (0:00:00.154) 0:00:38.502 ********* 2025-03-11 01:21:26.744225 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '97e17ca8-03b9-5252-bb63-cb66ff759452'}}) 2025-03-11 01:21:26.745646 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '334e37ea-3475-5092-ae3b-ad48e26f1952'}}) 2025-03-11 01:21:26.746819 | orchestrator | 2025-03-11 01:21:26.748646 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-11 01:21:26.749927 | orchestrator | Tuesday 11 March 2025 01:21:26 +0000 (0:00:00.202) 0:00:38.704 ********* 2025-03-11 01:21:28.972164 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'}) 2025-03-11 01:21:28.973118 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'}) 2025-03-11 01:21:28.976009 | orchestrator | 2025-03-11 01:21:28.977391 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-11 01:21:28.977985 | orchestrator | Tuesday 11 March 2025 01:21:28 +0000 (0:00:02.225) 0:00:40.929 ********* 2025-03-11 01:21:29.139555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:29.140681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:29.142212 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:29.143351 | orchestrator | 2025-03-11 01:21:29.144640 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-11 01:21:29.146058 | orchestrator | Tuesday 11 March 2025 01:21:29 +0000 (0:00:00.170) 0:00:41.100 ********* 2025-03-11 01:21:30.467922 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'}) 2025-03-11 01:21:30.468723 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'}) 2025-03-11 01:21:30.468764 | orchestrator | 2025-03-11 01:21:30.469326 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-11 01:21:30.469787 | orchestrator | Tuesday 11 March 2025 01:21:30 +0000 (0:00:01.327) 0:00:42.427 ********* 2025-03-11 01:21:30.626507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 
'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:30.629830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:30.629914 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:30.630721 | orchestrator | 2025-03-11 01:21:30.631316 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-11 01:21:30.631949 | orchestrator | Tuesday 11 March 2025 01:21:30 +0000 (0:00:00.160) 0:00:42.588 ********* 2025-03-11 01:21:30.776646 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:30.776987 | orchestrator | 2025-03-11 01:21:30.777599 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-11 01:21:30.778187 | orchestrator | Tuesday 11 March 2025 01:21:30 +0000 (0:00:00.149) 0:00:42.737 ********* 2025-03-11 01:21:30.959383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:30.959724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:30.960500 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:30.961041 | orchestrator | 2025-03-11 01:21:30.961906 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-11 01:21:30.962590 | orchestrator | Tuesday 11 March 2025 01:21:30 +0000 (0:00:00.184) 0:00:42.921 ********* 2025-03-11 01:21:31.304049 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:31.304197 | orchestrator | 2025-03-11 01:21:31.304828 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-11 01:21:31.305300 | orchestrator | 
Tuesday 11 March 2025 01:21:31 +0000 (0:00:00.338) 0:00:43.260 ********* 2025-03-11 01:21:31.464252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:31.464508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:31.465186 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:31.465623 | orchestrator | 2025-03-11 01:21:31.465942 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-11 01:21:31.466268 | orchestrator | Tuesday 11 March 2025 01:21:31 +0000 (0:00:00.165) 0:00:43.425 ********* 2025-03-11 01:21:31.614933 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:31.616199 | orchestrator | 2025-03-11 01:21:31.617101 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-11 01:21:31.618346 | orchestrator | Tuesday 11 March 2025 01:21:31 +0000 (0:00:00.148) 0:00:43.574 ********* 2025-03-11 01:21:31.795620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:31.795910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:31.796787 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:31.797386 | orchestrator | 2025-03-11 01:21:31.797798 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-11 01:21:31.799139 | orchestrator | Tuesday 11 March 2025 01:21:31 +0000 (0:00:00.181) 0:00:43.756 ********* 2025-03-11 01:21:31.952219 | orchestrator | ok: [testbed-node-4] 
2025-03-11 01:21:31.952338 | orchestrator | 2025-03-11 01:21:31.953056 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:21:31.953289 | orchestrator | Tuesday 11 March 2025 01:21:31 +0000 (0:00:00.157) 0:00:43.913 ********* 2025-03-11 01:21:32.111414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:32.111982 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:32.113479 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:32.116127 | orchestrator | 2025-03-11 01:21:32.116695 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:21:32.117532 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.157) 0:00:44.071 ********* 2025-03-11 01:21:32.281652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 01:21:32.282286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})  2025-03-11 01:21:32.283332 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:21:32.283875 | orchestrator | 2025-03-11 01:21:32.284882 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:21:32.285406 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.171) 0:00:44.243 ********* 2025-03-11 01:21:32.463282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})  2025-03-11 
01:21:32.465273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:32.465716 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:32.467495 | orchestrator |
2025-03-11 01:21:32.468916 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-03-11 01:21:32.470069 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.179) 0:00:44.422 *********
2025-03-11 01:21:32.613049 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:32.613827 | orchestrator |
2025-03-11 01:21:32.616128 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-03-11 01:21:32.617253 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.150) 0:00:44.573 *********
2025-03-11 01:21:32.760190 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:32.761772 | orchestrator |
2025-03-11 01:21:32.764446 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-03-11 01:21:32.765660 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.146) 0:00:44.720 *********
2025-03-11 01:21:32.896423 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:32.897845 | orchestrator |
2025-03-11 01:21:32.899263 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-03-11 01:21:32.901011 | orchestrator | Tuesday 11 March 2025 01:21:32 +0000 (0:00:00.136) 0:00:44.857 *********
2025-03-11 01:21:33.305460 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 01:21:33.306447 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-03-11 01:21:33.308754 | orchestrator | }
2025-03-11 01:21:33.309770 | orchestrator |
2025-03-11 01:21:33.309798 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-03-11 01:21:33.309819 | orchestrator | Tuesday 11 March 2025 01:21:33 +0000 (0:00:00.408) 0:00:45.265 *********
2025-03-11 01:21:33.455839 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 01:21:33.456302 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-03-11 01:21:33.457812 | orchestrator | }
2025-03-11 01:21:33.459466 | orchestrator |
2025-03-11 01:21:33.459845 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-03-11 01:21:33.461063 | orchestrator | Tuesday 11 March 2025 01:21:33 +0000 (0:00:00.151) 0:00:45.417 *********
2025-03-11 01:21:33.606405 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 01:21:33.607287 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-03-11 01:21:33.607324 | orchestrator | }
2025-03-11 01:21:33.608012 | orchestrator |
2025-03-11 01:21:33.608557 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-03-11 01:21:33.608907 | orchestrator | Tuesday 11 March 2025 01:21:33 +0000 (0:00:00.151) 0:00:45.568 *********
2025-03-11 01:21:34.125040 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:34.125237 | orchestrator |
2025-03-11 01:21:34.126107 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-03-11 01:21:34.126489 | orchestrator | Tuesday 11 March 2025 01:21:34 +0000 (0:00:00.517) 0:00:46.085 *********
2025-03-11 01:21:34.670830 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:34.671052 | orchestrator |
2025-03-11 01:21:34.671736 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-03-11 01:21:34.672162 | orchestrator | Tuesday 11 March 2025 01:21:34 +0000 (0:00:00.546) 0:00:46.631 *********
2025-03-11 01:21:35.210854 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:35.212406 | orchestrator |
2025-03-11 01:21:35.213154 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-03-11 01:21:35.215066 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.154) 0:00:47.166 *********
2025-03-11 01:21:35.359845 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:35.360726 | orchestrator |
2025-03-11 01:21:35.361894 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-03-11 01:21:35.362926 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.121) 0:00:47.321 *********
2025-03-11 01:21:35.481760 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:35.482605 | orchestrator |
2025-03-11 01:21:35.482676 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-03-11 01:21:35.482694 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.118) 0:00:47.442 *********
2025-03-11 01:21:35.600089 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:35.600161 | orchestrator |
2025-03-11 01:21:35.600302 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-03-11 01:21:35.601470 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.146) 0:00:47.561 *********
2025-03-11 01:21:35.746153 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 01:21:35.747648 | orchestrator |     "vgs_report": {
2025-03-11 01:21:35.749430 | orchestrator |         "vg": []
2025-03-11 01:21:35.750502 | orchestrator |     }
2025-03-11 01:21:35.751265 | orchestrator | }
2025-03-11 01:21:35.751866 | orchestrator |
2025-03-11 01:21:35.752699 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-03-11 01:21:35.753198 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.146) 0:00:47.707 *********
2025-03-11 01:21:35.891745 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:35.892467 | orchestrator |
2025-03-11 01:21:35.893591 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-03-11 01:21:35.894728 | orchestrator | Tuesday 11 March 2025 01:21:35 +0000 (0:00:00.144) 0:00:47.852 *********
2025-03-11 01:21:36.274837 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:36.275140 | orchestrator |
2025-03-11 01:21:36.275167 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-03-11 01:21:36.276503 | orchestrator | Tuesday 11 March 2025 01:21:36 +0000 (0:00:00.381) 0:00:48.233 *********
2025-03-11 01:21:36.458099 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:36.458984 | orchestrator |
2025-03-11 01:21:36.460619 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-03-11 01:21:36.461272 | orchestrator | Tuesday 11 March 2025 01:21:36 +0000 (0:00:00.184) 0:00:48.417 *********
2025-03-11 01:21:36.612370 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:36.613480 | orchestrator |
2025-03-11 01:21:36.614131 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-03-11 01:21:36.615266 | orchestrator | Tuesday 11 March 2025 01:21:36 +0000 (0:00:00.153) 0:00:48.571 *********
2025-03-11 01:21:36.760937 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:36.762486 | orchestrator |
2025-03-11 01:21:36.764121 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-03-11 01:21:36.764611 | orchestrator | Tuesday 11 March 2025 01:21:36 +0000 (0:00:00.151) 0:00:48.722 *********
2025-03-11 01:21:36.895962 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:36.896812 | orchestrator |
2025-03-11 01:21:36.897668 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-03-11 01:21:36.898688 | orchestrator | Tuesday 11 March 2025 01:21:36 +0000 (0:00:00.135) 0:00:48.857 *********
2025-03-11 01:21:37.048105 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.048833 | orchestrator |
2025-03-11 01:21:37.049510 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-03-11 01:21:37.049756 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.153) 0:00:49.010 *********
2025-03-11 01:21:37.191286 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.191811 | orchestrator |
2025-03-11 01:21:37.193179 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-03-11 01:21:37.193956 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.140) 0:00:49.151 *********
2025-03-11 01:21:37.323354 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.323689 | orchestrator |
2025-03-11 01:21:37.324907 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-03-11 01:21:37.325969 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.133) 0:00:49.284 *********
2025-03-11 01:21:37.467129 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.467685 | orchestrator |
2025-03-11 01:21:37.469015 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-03-11 01:21:37.470761 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.144) 0:00:49.429 *********
2025-03-11 01:21:37.637720 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.639799 | orchestrator |
2025-03-11 01:21:37.640816 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-03-11 01:21:37.642690 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.169) 0:00:49.598 *********
2025-03-11 01:21:37.824668 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.825000 | orchestrator |
2025-03-11 01:21:37.825693 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-03-11 01:21:37.963220 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.187) 0:00:49.785 *********
2025-03-11 01:21:37.963271 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:37.964276 | orchestrator |
2025-03-11 01:21:37.965229 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-03-11 01:21:37.965871 | orchestrator | Tuesday 11 March 2025 01:21:37 +0000 (0:00:00.136) 0:00:49.921 *********
2025-03-11 01:21:38.343779 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:38.344803 | orchestrator |
2025-03-11 01:21:38.345123 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-03-11 01:21:38.345954 | orchestrator | Tuesday 11 March 2025 01:21:38 +0000 (0:00:00.382) 0:00:50.304 *********
2025-03-11 01:21:38.554746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:38.555511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:38.557324 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:38.558300 | orchestrator |
2025-03-11 01:21:38.559148 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-03-11 01:21:38.559615 | orchestrator | Tuesday 11 March 2025 01:21:38 +0000 (0:00:00.211) 0:00:50.515 *********
2025-03-11 01:21:38.745283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:38.745897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:38.745932 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:38.745950 | orchestrator |
2025-03-11 01:21:38.745974 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-03-11 01:21:38.746402 | orchestrator | Tuesday 11 March 2025 01:21:38 +0000 (0:00:00.189) 0:00:50.705 *********
2025-03-11 01:21:38.938101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:38.938644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:38.939407 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:38.940220 | orchestrator |
2025-03-11 01:21:38.940320 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-03-11 01:21:38.940468 | orchestrator | Tuesday 11 March 2025 01:21:38 +0000 (0:00:00.193) 0:00:50.899 *********
2025-03-11 01:21:39.124879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:39.125114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:39.126150 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:39.126240 | orchestrator |
2025-03-11 01:21:39.126687 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-03-11 01:21:39.127180 | orchestrator | Tuesday 11 March 2025 01:21:39 +0000 (0:00:00.187) 0:00:51.086 *********
2025-03-11 01:21:39.292415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:39.292775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:39.293966 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:39.295041 | orchestrator |
2025-03-11 01:21:39.296031 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-03-11 01:21:39.297154 | orchestrator | Tuesday 11 March 2025 01:21:39 +0000 (0:00:00.164) 0:00:51.251 *********
2025-03-11 01:21:39.481815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:39.483220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:39.485503 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:39.485832 | orchestrator |
2025-03-11 01:21:39.486608 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-03-11 01:21:39.488269 | orchestrator | Tuesday 11 March 2025 01:21:39 +0000 (0:00:00.189) 0:00:51.440 *********
2025-03-11 01:21:39.663338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:39.664058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:39.664101 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:39.664319 | orchestrator |
2025-03-11 01:21:39.664714 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-03-11 01:21:39.665122 | orchestrator | Tuesday 11 March 2025 01:21:39 +0000 (0:00:00.184) 0:00:51.624 *********
2025-03-11 01:21:39.838428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:39.839562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:39.840034 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:39.841197 | orchestrator |
2025-03-11 01:21:39.842188 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-03-11 01:21:39.843425 | orchestrator | Tuesday 11 March 2025 01:21:39 +0000 (0:00:00.172) 0:00:51.797 *********
2025-03-11 01:21:40.450737 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:40.451046 | orchestrator |
2025-03-11 01:21:40.451531 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-03-11 01:21:40.452287 | orchestrator | Tuesday 11 March 2025 01:21:40 +0000 (0:00:00.613) 0:00:52.410 *********
2025-03-11 01:21:40.995711 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:40.996414 | orchestrator |
2025-03-11 01:21:40.996454 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-03-11 01:21:40.997174 | orchestrator | Tuesday 11 March 2025 01:21:40 +0000 (0:00:00.545) 0:00:52.955 *********
2025-03-11 01:21:41.393643 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:21:41.395053 | orchestrator |
2025-03-11 01:21:41.396388 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-03-11 01:21:41.396422 | orchestrator | Tuesday 11 March 2025 01:21:41 +0000 (0:00:00.397) 0:00:53.353 *********
2025-03-11 01:21:41.579020 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'vg_name': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:41.579533 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'vg_name': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:41.579801 | orchestrator |
2025-03-11 01:21:41.580817 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-03-11 01:21:41.581599 | orchestrator | Tuesday 11 March 2025 01:21:41 +0000 (0:00:00.187) 0:00:53.540 *********
2025-03-11 01:21:41.787207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:41.787634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:41.788087 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:41.788157 | orchestrator |
2025-03-11 01:21:41.788645 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-03-11 01:21:41.788712 | orchestrator | Tuesday 11 March 2025 01:21:41 +0000 (0:00:00.209) 0:00:53.749 *********
2025-03-11 01:21:41.992477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:41.994906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:42.177359 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:42.177423 | orchestrator |
2025-03-11 01:21:42.177440 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-03-11 01:21:42.177455 | orchestrator | Tuesday 11 March 2025 01:21:41 +0000 (0:00:00.203) 0:00:53.953 *********
2025-03-11 01:21:42.177480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452', 'data_vg': 'ceph-97e17ca8-03b9-5252-bb63-cb66ff759452'})
2025-03-11 01:21:42.178230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952', 'data_vg': 'ceph-334e37ea-3475-5092-ae3b-ad48e26f1952'})
2025-03-11 01:21:42.179993 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:21:42.181114 | orchestrator |
2025-03-11 01:21:42.181661 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-03-11 01:21:42.182459 | orchestrator | Tuesday 11 March 2025 01:21:42 +0000 (0:00:00.181) 0:00:54.135 *********
2025-03-11 01:21:43.146218 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 01:21:43.146364 | orchestrator |     "lvm_report": {
2025-03-11 01:21:43.147217 | orchestrator |         "lv": [
2025-03-11 01:21:43.148691 | orchestrator |             {
2025-03-11 01:21:43.148905 | orchestrator |                 "lv_name": "osd-block-334e37ea-3475-5092-ae3b-ad48e26f1952",
2025-03-11 01:21:43.149378 | orchestrator |                 "vg_name": "ceph-334e37ea-3475-5092-ae3b-ad48e26f1952"
2025-03-11 01:21:43.149954 | orchestrator |             },
2025-03-11 01:21:43.150699 | orchestrator |             {
2025-03-11 01:21:43.152395 | orchestrator |                 "lv_name": "osd-block-97e17ca8-03b9-5252-bb63-cb66ff759452",
2025-03-11 01:21:43.152972 | orchestrator |                 "vg_name": "ceph-97e17ca8-03b9-5252-bb63-cb66ff759452"
2025-03-11 01:21:43.153183 | orchestrator |             }
2025-03-11 01:21:43.153882 | orchestrator |         ],
2025-03-11 01:21:43.154467 | orchestrator |         "pv": [
2025-03-11 01:21:43.154736 | orchestrator |             {
2025-03-11 01:21:43.154933 | orchestrator |                 "pv_name": "/dev/sdb",
2025-03-11 01:21:43.155745 | orchestrator |                 "vg_name": "ceph-97e17ca8-03b9-5252-bb63-cb66ff759452"
2025-03-11 01:21:43.156163 | orchestrator |             },
2025-03-11 01:21:43.156845 | orchestrator |             {
2025-03-11 01:21:43.157248 | orchestrator |                 "pv_name": "/dev/sdc",
2025-03-11 01:21:43.157475 | orchestrator |                 "vg_name": "ceph-334e37ea-3475-5092-ae3b-ad48e26f1952"
2025-03-11 01:21:43.157841 | orchestrator |             }
2025-03-11 01:21:43.158108 | orchestrator |         ]
2025-03-11 01:21:43.158421 | orchestrator |     }
2025-03-11 01:21:43.158930 | orchestrator | }
2025-03-11 01:21:43.159062 | orchestrator |
2025-03-11 01:21:43.159443 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-03-11 01:21:43.160223 | orchestrator |
2025-03-11 01:21:43.160434 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 01:21:43.160749 | orchestrator | Tuesday 11 March 2025 01:21:43 +0000 (0:00:00.967) 0:00:55.103 *********
2025-03-11 01:21:43.398649 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-03-11 01:21:43.399734 | orchestrator |
2025-03-11 01:21:43.402500 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 01:21:43.403456 | orchestrator | Tuesday 11 March 2025 01:21:43 +0000 (0:00:00.254) 0:00:55.357 *********
2025-03-11 01:21:43.654140 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:21:43.655453 | orchestrator |
2025-03-11 01:21:43.657322 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:44.192014 | orchestrator | Tuesday 11 March 2025 01:21:43 +0000 (0:00:00.256) 0:00:55.614 *********
2025-03-11 01:21:44.192184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-03-11 01:21:44.192257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-03-11 01:21:44.192710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-03-11 01:21:44.193494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-03-11 01:21:44.193784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-03-11 01:21:44.194183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-03-11 01:21:44.195067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-03-11 01:21:44.195425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-03-11 01:21:44.195847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-03-11 01:21:44.196012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-03-11 01:21:44.196700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-03-11 01:21:44.197069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-03-11 01:21:44.197534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-03-11 01:21:44.198191 | orchestrator |
2025-03-11 01:21:44.198291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:44.198601 | orchestrator | Tuesday 11 March 2025 01:21:44 +0000 (0:00:00.538) 0:00:56.152 *********
2025-03-11 01:21:44.388133 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:44.388309 | orchestrator |
2025-03-11 01:21:44.388929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:44.389540 | orchestrator | Tuesday 11 March 2025 01:21:44 +0000 (0:00:00.195) 0:00:56.347 *********
2025-03-11 01:21:44.602932 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:44.603797 | orchestrator |
2025-03-11 01:21:44.603846 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:44.604212 | orchestrator | Tuesday 11 March 2025 01:21:44 +0000 (0:00:00.215) 0:00:56.563 *********
2025-03-11 01:21:44.824057 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:44.824817 | orchestrator |
2025-03-11 01:21:44.826103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:44.826843 | orchestrator | Tuesday 11 March 2025 01:21:44 +0000 (0:00:00.221) 0:00:56.785 *********
2025-03-11 01:21:45.032234 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:45.032477 | orchestrator |
2025-03-11 01:21:45.033107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:45.034383 | orchestrator | Tuesday 11 March 2025 01:21:45 +0000 (0:00:00.207) 0:00:56.993 *********
2025-03-11 01:21:45.693943 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:45.694955 | orchestrator |
2025-03-11 01:21:45.695634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:45.696386 | orchestrator | Tuesday 11 March 2025 01:21:45 +0000 (0:00:00.657) 0:00:57.651 *********
2025-03-11 01:21:45.908087 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:45.908645 | orchestrator |
2025-03-11 01:21:45.909461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:46.143235 | orchestrator | Tuesday 11 March 2025 01:21:45 +0000 (0:00:00.217) 0:00:57.869 *********
2025-03-11 01:21:46.143358 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:46.143973 | orchestrator |
2025-03-11 01:21:46.145207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:46.145279 | orchestrator | Tuesday 11 March 2025 01:21:46 +0000 (0:00:00.236) 0:00:58.105 *********
2025-03-11 01:21:46.385714 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:46.843252 | orchestrator |
2025-03-11 01:21:46.843309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:46.843323 | orchestrator | Tuesday 11 March 2025 01:21:46 +0000 (0:00:00.235) 0:00:58.341 *********
2025-03-11 01:21:46.843336 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71229aa9-8ade-48b7-b965-f199404d9b59)
2025-03-11 01:21:46.843800 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71229aa9-8ade-48b7-b965-f199404d9b59)
2025-03-11 01:21:46.844176 | orchestrator |
2025-03-11 01:21:46.844870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:46.845116 | orchestrator | Tuesday 11 March 2025 01:21:46 +0000 (0:00:00.464) 0:00:58.805 *********
2025-03-11 01:21:47.263684 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a404a76d-1978-41bb-a69d-8095668152b7)
2025-03-11 01:21:47.264962 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a404a76d-1978-41bb-a69d-8095668152b7)
2025-03-11 01:21:47.265030 | orchestrator |
2025-03-11 01:21:47.265964 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:47.266721 | orchestrator | Tuesday 11 March 2025 01:21:47 +0000 (0:00:00.419) 0:00:59.224 *********
2025-03-11 01:21:47.750296 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5)
2025-03-11 01:21:47.751000 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cb6ea6ed-5312-4391-a5a4-78c4bbaaccd5)
2025-03-11 01:21:47.751614 | orchestrator |
2025-03-11 01:21:47.752765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:47.753773 | orchestrator | Tuesday 11 March 2025 01:21:47 +0000 (0:00:00.485) 0:00:59.710 *********
2025-03-11 01:21:48.238090 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4dc4161b-8ed1-4e64-9782-2a846a023c92)
2025-03-11 01:21:48.238233 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4dc4161b-8ed1-4e64-9782-2a846a023c92)
2025-03-11 01:21:48.238801 | orchestrator |
2025-03-11 01:21:48.238880 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:21:48.239729 | orchestrator | Tuesday 11 March 2025 01:21:48 +0000 (0:00:00.485) 0:01:00.195 *********
2025-03-11 01:21:48.593947 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 01:21:48.594595 | orchestrator |
2025-03-11 01:21:48.596823 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:48.597789 | orchestrator | Tuesday 11 March 2025 01:21:48 +0000 (0:00:00.358) 0:01:00.554 *********
2025-03-11 01:21:49.351884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-03-11 01:21:49.354689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-03-11 01:21:49.356799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-03-11 01:21:49.357626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-03-11 01:21:49.358375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-03-11 01:21:49.359069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-03-11 01:21:49.359898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-03-11 01:21:49.360265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-03-11 01:21:49.360689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-03-11 01:21:49.361458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-03-11 01:21:49.361960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-03-11 01:21:49.362349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-03-11 01:21:49.363180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-03-11 01:21:49.364536 | orchestrator |
2025-03-11 01:21:49.364675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:49.365850 | orchestrator | Tuesday 11 March 2025 01:21:49 +0000 (0:00:00.758) 0:01:01.312 *********
2025-03-11 01:21:49.567229 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:49.567489 | orchestrator |
2025-03-11 01:21:49.568136 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:49.569019 | orchestrator | Tuesday 11 March 2025 01:21:49 +0000 (0:00:00.216) 0:01:01.528 *********
2025-03-11 01:21:49.796914 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:49.798280 | orchestrator |
2025-03-11 01:21:49.798978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:49.799746 | orchestrator | Tuesday 11 March 2025 01:21:49 +0000 (0:00:00.229) 0:01:01.758 *********
2025-03-11 01:21:50.016768 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:50.017800 | orchestrator |
2025-03-11 01:21:50.018735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:50.019422 | orchestrator | Tuesday 11 March 2025 01:21:50 +0000 (0:00:00.219) 0:01:01.978 *********
2025-03-11 01:21:50.222733 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:50.223498 | orchestrator |
2025-03-11 01:21:50.224348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:50.224627 | orchestrator | Tuesday 11 March 2025 01:21:50 +0000 (0:00:00.205) 0:01:02.183 *********
2025-03-11 01:21:50.421188 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:50.425232 | orchestrator |
2025-03-11 01:21:50.427562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:50.612228 | orchestrator | Tuesday 11 March 2025 01:21:50 +0000 (0:00:00.192) 0:01:02.376 *********
2025-03-11 01:21:50.612288 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:50.613462 | orchestrator |
2025-03-11 01:21:50.615519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:50.616989 | orchestrator | Tuesday 11 March 2025 01:21:50 +0000 (0:00:00.196) 0:01:02.573 *********
2025-03-11 01:21:50.848931 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:50.850400 | orchestrator |
2025-03-11 01:21:50.850508 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:50.851256 | orchestrator | Tuesday 11 March 2025 01:21:50 +0000 (0:00:00.234) 0:01:02.808 *********
2025-03-11 01:21:51.063406 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:51.064778 | orchestrator |
2025-03-11 01:21:51.065865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:51.066848 | orchestrator | Tuesday 11 March 2025 01:21:51 +0000 (0:00:00.216) 0:01:03.024 *********
2025-03-11 01:21:52.014748 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-03-11 01:21:52.015635 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-03-11 01:21:52.017056 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-03-11 01:21:52.020806 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-03-11 01:21:52.021874 | orchestrator |
2025-03-11 01:21:52.022946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:52.023603 | orchestrator | Tuesday 11 March 2025 01:21:52 +0000 (0:00:00.949) 0:01:03.974 *********
2025-03-11 01:21:52.229980 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:52.230132 | orchestrator |
2025-03-11 01:21:52.230784 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:52.231069 | orchestrator | Tuesday 11 March 2025 01:21:52 +0000 (0:00:00.216) 0:01:04.191 *********
2025-03-11 01:21:52.679258 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:52.679423 | orchestrator |
2025-03-11 01:21:52.681600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:52.903713 | orchestrator | Tuesday 11 March 2025 01:21:52 +0000 (0:00:00.446) 0:01:04.638 *********
2025-03-11 01:21:52.903771 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:52.905221 | orchestrator |
2025-03-11 01:21:52.906178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:21:52.906197 | orchestrator | Tuesday 11 March 2025 01:21:52 +0000 (0:00:00.225) 0:01:04.863 *********
2025-03-11 01:21:53.115187 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:53.115710 | orchestrator |
2025-03-11 01:21:53.116482 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-03-11 01:21:53.117634 | orchestrator | Tuesday 11 March 2025 01:21:53 +0000 (0:00:00.212) 0:01:05.076 *********
2025-03-11 01:21:53.266491 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:21:53.267186 | orchestrator |
2025-03-11 01:21:53.267559 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-03-11 01:21:53.268018 | orchestrator | Tuesday 11 March 2025 01:21:53 +0000 (0:00:00.151) 0:01:05.228 *********
2025-03-11 01:21:53.504004 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cedb9017-cc47-5b88-9282-51f2e5626d00'}})
2025-03-11 01:21:53.504133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6525db2c-5f0a-5e5f-9376-d59b0a20baba'}})
2025-03-11 01:21:53.504738 | orchestrator |
2025-03-11 01:21:53.505091 | orchestrator | TASK [Create block VGs] ********************************************************
2025-03-11 01:21:53.506700 | orchestrator | Tuesday 11 March 2025 01:21:53 +0000 (0:00:02.102) 0:01:05.464 *********
2025-03-11 01:21:55.607098 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:21:55.607367 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:21:55.607392 | orchestrator |
2025-03-11 01:21:55.607413 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-03-11 01:21:55.607688 | orchestrator | Tuesday 11 March 2025 01:21:55 +0000 (0:00:02.102) 0:01:07.567 *********
2025-03-11 01:21:55.785600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:21:55.786731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:21:55.787870 | orchestrator | skipping:
[testbed-node-5] 2025-03-11 01:21:55.788434 | orchestrator | 2025-03-11 01:21:55.789115 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-11 01:21:55.789534 | orchestrator | Tuesday 11 March 2025 01:21:55 +0000 (0:00:00.177) 0:01:07.744 ********* 2025-03-11 01:21:57.119364 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'}) 2025-03-11 01:21:57.121989 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'}) 2025-03-11 01:21:57.123264 | orchestrator | 2025-03-11 01:21:57.124964 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-11 01:21:57.125640 | orchestrator | Tuesday 11 March 2025 01:21:57 +0000 (0:00:01.332) 0:01:09.077 ********* 2025-03-11 01:21:57.523445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:57.524548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:57.524981 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:57.529243 | orchestrator | 2025-03-11 01:21:57.529701 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-11 01:21:57.531979 | orchestrator | Tuesday 11 March 2025 01:21:57 +0000 (0:00:00.406) 0:01:09.484 ********* 2025-03-11 01:21:57.677123 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:57.677507 | orchestrator | 2025-03-11 01:21:57.678612 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-11 01:21:57.679215 | 
orchestrator | Tuesday 11 March 2025 01:21:57 +0000 (0:00:00.153) 0:01:09.638 ********* 2025-03-11 01:21:57.880749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:57.881521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:57.881966 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:57.882294 | orchestrator | 2025-03-11 01:21:57.883101 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-11 01:21:57.883406 | orchestrator | Tuesday 11 March 2025 01:21:57 +0000 (0:00:00.201) 0:01:09.839 ********* 2025-03-11 01:21:58.031081 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:58.031564 | orchestrator | 2025-03-11 01:21:58.031649 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-11 01:21:58.032115 | orchestrator | Tuesday 11 March 2025 01:21:58 +0000 (0:00:00.152) 0:01:09.992 ********* 2025-03-11 01:21:58.258753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:58.259799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:58.263206 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:58.263633 | orchestrator | 2025-03-11 01:21:58.263671 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-11 01:21:58.470639 | orchestrator | Tuesday 11 March 2025 01:21:58 +0000 (0:00:00.227) 0:01:10.220 ********* 2025-03-11 01:21:58.470722 | orchestrator | 
skipping: [testbed-node-5] 2025-03-11 01:21:58.470784 | orchestrator | 2025-03-11 01:21:58.471325 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-11 01:21:58.471762 | orchestrator | Tuesday 11 March 2025 01:21:58 +0000 (0:00:00.211) 0:01:10.431 ********* 2025-03-11 01:21:58.630864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:58.631435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:58.631541 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:58.631599 | orchestrator | 2025-03-11 01:21:58.631669 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-11 01:21:58.632024 | orchestrator | Tuesday 11 March 2025 01:21:58 +0000 (0:00:00.158) 0:01:10.590 ********* 2025-03-11 01:21:58.784731 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:21:58.785426 | orchestrator | 2025-03-11 01:21:58.786113 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:21:58.786462 | orchestrator | Tuesday 11 March 2025 01:21:58 +0000 (0:00:00.155) 0:01:10.745 ********* 2025-03-11 01:21:59.011459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:59.012058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:59.012102 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:59.012549 | orchestrator | 2025-03-11 01:21:59.012890 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:21:59.013694 | orchestrator | Tuesday 11 March 2025 01:21:59 +0000 (0:00:00.226) 0:01:10.972 ********* 2025-03-11 01:21:59.180247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:59.180729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:59.181684 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:59.182145 | orchestrator | 2025-03-11 01:21:59.182610 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:21:59.182962 | orchestrator | Tuesday 11 March 2025 01:21:59 +0000 (0:00:00.169) 0:01:11.141 ********* 2025-03-11 01:21:59.368538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})  2025-03-11 01:21:59.369628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})  2025-03-11 01:21:59.369673 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:59.371126 | orchestrator | 2025-03-11 01:21:59.372197 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-11 01:21:59.372925 | orchestrator | Tuesday 11 March 2025 01:21:59 +0000 (0:00:00.186) 0:01:11.328 ********* 2025-03-11 01:21:59.512044 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:59.512736 | orchestrator | 2025-03-11 01:21:59.513751 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-11 01:21:59.514461 | orchestrator | Tuesday 11 March 2025 01:21:59 +0000 
(0:00:00.145) 0:01:11.473 ********* 2025-03-11 01:21:59.911612 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:21:59.914814 | orchestrator | 2025-03-11 01:21:59.915306 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-11 01:21:59.915824 | orchestrator | Tuesday 11 March 2025 01:21:59 +0000 (0:00:00.397) 0:01:11.871 ********* 2025-03-11 01:22:00.065282 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:00.065561 | orchestrator | 2025-03-11 01:22:00.066113 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-11 01:22:00.066791 | orchestrator | Tuesday 11 March 2025 01:22:00 +0000 (0:00:00.155) 0:01:12.026 ********* 2025-03-11 01:22:00.213296 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:22:00.213751 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-11 01:22:00.213794 | orchestrator | } 2025-03-11 01:22:00.214106 | orchestrator | 2025-03-11 01:22:00.214603 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-11 01:22:00.215268 | orchestrator | Tuesday 11 March 2025 01:22:00 +0000 (0:00:00.147) 0:01:12.174 ********* 2025-03-11 01:22:00.362888 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:22:00.363701 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-11 01:22:00.364130 | orchestrator | } 2025-03-11 01:22:00.364482 | orchestrator | 2025-03-11 01:22:00.364711 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-11 01:22:00.365332 | orchestrator | Tuesday 11 March 2025 01:22:00 +0000 (0:00:00.150) 0:01:12.324 ********* 2025-03-11 01:22:00.527752 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:22:00.528023 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-11 01:22:00.528896 | orchestrator | } 2025-03-11 01:22:00.530132 | orchestrator | 2025-03-11 01:22:00.530617 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-11 01:22:00.531010 | orchestrator | Tuesday 11 March 2025 01:22:00 +0000 (0:00:00.164) 0:01:12.488 ********* 2025-03-11 01:22:01.084633 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:22:01.086010 | orchestrator | 2025-03-11 01:22:01.086728 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-11 01:22:01.089379 | orchestrator | Tuesday 11 March 2025 01:22:01 +0000 (0:00:00.555) 0:01:13.044 ********* 2025-03-11 01:22:01.614666 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:22:01.615701 | orchestrator | 2025-03-11 01:22:01.616553 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-11 01:22:01.616952 | orchestrator | Tuesday 11 March 2025 01:22:01 +0000 (0:00:00.531) 0:01:13.576 ********* 2025-03-11 01:22:02.120370 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:22:02.124192 | orchestrator | 2025-03-11 01:22:02.126647 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-11 01:22:02.127836 | orchestrator | Tuesday 11 March 2025 01:22:02 +0000 (0:00:00.501) 0:01:14.078 ********* 2025-03-11 01:22:02.271608 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:22:02.272622 | orchestrator | 2025-03-11 01:22:02.273120 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-11 01:22:02.393001 | orchestrator | Tuesday 11 March 2025 01:22:02 +0000 (0:00:00.155) 0:01:14.233 ********* 2025-03-11 01:22:02.393077 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:02.393761 | orchestrator | 2025-03-11 01:22:02.394626 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-11 01:22:02.395987 | orchestrator | Tuesday 11 March 2025 01:22:02 +0000 (0:00:00.121) 0:01:14.355 ********* 2025-03-11 01:22:02.512230 | 
orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:02.513555 | orchestrator | 2025-03-11 01:22:02.904697 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-11 01:22:02.904776 | orchestrator | Tuesday 11 March 2025 01:22:02 +0000 (0:00:00.119) 0:01:14.474 ********* 2025-03-11 01:22:02.904817 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:22:02.905083 | orchestrator |  "vgs_report": { 2025-03-11 01:22:02.905911 | orchestrator |  "vg": [] 2025-03-11 01:22:02.906467 | orchestrator |  } 2025-03-11 01:22:02.907309 | orchestrator | } 2025-03-11 01:22:02.908101 | orchestrator | 2025-03-11 01:22:02.910534 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-11 01:22:03.046886 | orchestrator | Tuesday 11 March 2025 01:22:02 +0000 (0:00:00.391) 0:01:14.865 ********* 2025-03-11 01:22:03.046940 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.047034 | orchestrator | 2025-03-11 01:22:03.048592 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-11 01:22:03.048702 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.143) 0:01:15.008 ********* 2025-03-11 01:22:03.197049 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.197167 | orchestrator | 2025-03-11 01:22:03.197614 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-11 01:22:03.197687 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.149) 0:01:15.158 ********* 2025-03-11 01:22:03.337098 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.337237 | orchestrator | 2025-03-11 01:22:03.337833 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-11 01:22:03.338058 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.140) 0:01:15.298 ********* 2025-03-11 01:22:03.482006 | 
orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.483104 | orchestrator | 2025-03-11 01:22:03.484305 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-11 01:22:03.485592 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.143) 0:01:15.442 ********* 2025-03-11 01:22:03.649470 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.649983 | orchestrator | 2025-03-11 01:22:03.650421 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-11 01:22:03.651044 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.168) 0:01:15.611 ********* 2025-03-11 01:22:03.805418 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.933741 | orchestrator | 2025-03-11 01:22:03.933780 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-11 01:22:03.933796 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.153) 0:01:15.765 ********* 2025-03-11 01:22:03.933817 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:03.935186 | orchestrator | 2025-03-11 01:22:03.937105 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-11 01:22:03.937316 | orchestrator | Tuesday 11 March 2025 01:22:03 +0000 (0:00:00.129) 0:01:15.894 ********* 2025-03-11 01:22:04.120776 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:04.120962 | orchestrator | 2025-03-11 01:22:04.121627 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-11 01:22:04.122360 | orchestrator | Tuesday 11 March 2025 01:22:04 +0000 (0:00:00.186) 0:01:16.081 ********* 2025-03-11 01:22:04.259846 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:22:04.260112 | orchestrator | 2025-03-11 01:22:04.261172 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
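The loop items above ("Create dict of block VGs -> PVs" through "Create block LVs") all derive their names from the `osd_lvm_uuid` of each entry in `ceph_osd_devices`: the LV is `osd-block-<uuid>` and the VG is `ceph-<uuid>`. A minimal illustrative sketch of that mapping (not the playbook's actual code, just reproducing the naming visible in the log):

```python
# Sketch: derive the osd-block-<uuid>/ceph-<uuid> names seen in the log items
# from ceph_osd_devices entries. Function name is illustrative, not from OSISM.
def build_lvm_volumes(ceph_osd_devices):
    """Return one {'data': ..., 'data_vg': ...} item per configured OSD device."""
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group created on the device
        })
    return volumes

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "cedb9017-cc47-5b88-9282-51f2e5626d00"},
    "sdc": {"osd_lvm_uuid": "6525db2c-5f0a-5e5f-9376-d59b0a20baba"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

The two dicts printed match the `changed:` items of the "Create block VGs" and "Create block LVs" tasks.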
2025-03-11 01:22:04.262191 | orchestrator | Tuesday 11 March 2025 01:22:04 +0000 (0:00:00.140) 0:01:16.221 *********
2025-03-11 01:22:04.401649 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:04.403055 | orchestrator |
2025-03-11 01:22:04.405334 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-03-11 01:22:04.405954 | orchestrator | Tuesday 11 March 2025 01:22:04 +0000 (0:00:00.140) 0:01:16.362 *********
2025-03-11 01:22:04.556452 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:04.556687 | orchestrator |
2025-03-11 01:22:04.557556 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-03-11 01:22:04.558444 | orchestrator | Tuesday 11 March 2025 01:22:04 +0000 (0:00:00.155) 0:01:16.517 *********
2025-03-11 01:22:04.934801 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:04.935105 | orchestrator |
2025-03-11 01:22:04.935783 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-03-11 01:22:04.936905 | orchestrator | Tuesday 11 March 2025 01:22:04 +0000 (0:00:00.376) 0:01:16.893 *********
2025-03-11 01:22:05.103107 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.103617 | orchestrator |
2025-03-11 01:22:05.104015 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-03-11 01:22:05.104505 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.170) 0:01:17.064 *********
2025-03-11 01:22:05.247921 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.249808 | orchestrator |
2025-03-11 01:22:05.250407 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-03-11 01:22:05.251107 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.143) 0:01:17.208 *********
2025-03-11 01:22:05.422240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:05.423259 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:05.426528 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.427556 | orchestrator |
2025-03-11 01:22:05.428553 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-03-11 01:22:05.429662 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.173) 0:01:17.382 *********
2025-03-11 01:22:05.616616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:05.616934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:05.618142 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.619127 | orchestrator |
2025-03-11 01:22:05.619822 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-03-11 01:22:05.620415 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.195) 0:01:17.577 *********
2025-03-11 01:22:05.803902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:05.804064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:05.804754 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.805004 | orchestrator |
2025-03-11 01:22:05.805496 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-03-11 01:22:05.806125 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.187) 0:01:17.765 *********
2025-03-11 01:22:05.972029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:05.972765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:05.972801 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:05.973136 | orchestrator |
2025-03-11 01:22:05.973516 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-03-11 01:22:05.973846 | orchestrator | Tuesday 11 March 2025 01:22:05 +0000 (0:00:00.166) 0:01:17.932 *********
2025-03-11 01:22:06.141749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:06.142443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:06.142549 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:06.142563 | orchestrator |
2025-03-11 01:22:06.142598 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-03-11 01:22:06.142908 | orchestrator | Tuesday 11 March 2025 01:22:06 +0000 (0:00:00.168) 0:01:18.101 *********
2025-03-11 01:22:06.308506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:06.308965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:06.312786 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:06.313341 | orchestrator |
2025-03-11 01:22:06.313383 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-03-11 01:22:06.313847 | orchestrator | Tuesday 11 March 2025 01:22:06 +0000 (0:00:00.167) 0:01:18.268 *********
2025-03-11 01:22:06.510677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:06.512053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:06.513931 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:06.514537 | orchestrator |
2025-03-11 01:22:06.515325 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-03-11 01:22:06.516338 | orchestrator | Tuesday 11 March 2025 01:22:06 +0000 (0:00:00.202) 0:01:18.471 *********
2025-03-11 01:22:06.681950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:06.682188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:06.682803 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:06.683599 | orchestrator |
2025-03-11 01:22:06.684149 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-03-11 01:22:06.684738 | orchestrator | Tuesday 11 March 2025 01:22:06 +0000 (0:00:00.172) 0:01:18.643 *********
2025-03-11 01:22:07.469260 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:22:07.470350 | orchestrator |
2025-03-11 01:22:07.471476 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-03-11 01:22:08.007859 | orchestrator | Tuesday 11 March 2025 01:22:07 +0000 (0:00:00.783) 0:01:19.426 *********
2025-03-11 01:22:08.007987 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:22:08.008559 | orchestrator |
2025-03-11 01:22:08.008625 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-03-11 01:22:08.009225 | orchestrator | Tuesday 11 March 2025 01:22:07 +0000 (0:00:00.540) 0:01:19.967 *********
2025-03-11 01:22:08.188740 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:22:08.188879 | orchestrator |
2025-03-11 01:22:08.190057 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-03-11 01:22:08.190549 | orchestrator | Tuesday 11 March 2025 01:22:08 +0000 (0:00:00.180) 0:01:20.147 *********
2025-03-11 01:22:08.371391 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'vg_name': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:08.372285 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'vg_name': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:08.372992 | orchestrator |
2025-03-11 01:22:08.373821 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-03-11 01:22:08.374135 | orchestrator | Tuesday 11 March 2025 01:22:08 +0000 (0:00:00.184) 0:01:20.331 *********
2025-03-11 01:22:08.578395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:08.579741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:08.581335 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:08.582863 | orchestrator |
2025-03-11 01:22:08.584300 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-03-11 01:22:08.585711 | orchestrator | Tuesday 11 March 2025 01:22:08 +0000 (0:00:00.207) 0:01:20.539 *********
2025-03-11 01:22:08.804349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:08.805473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:08.806316 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:08.806778 | orchestrator |
2025-03-11 01:22:08.807781 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-03-11 01:22:08.808587 | orchestrator | Tuesday 11 March 2025 01:22:08 +0000 (0:00:00.224) 0:01:20.763 *********
2025-03-11 01:22:08.984069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00', 'data_vg': 'ceph-cedb9017-cc47-5b88-9282-51f2e5626d00'})
2025-03-11 01:22:08.984269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba', 'data_vg': 'ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba'})
2025-03-11 01:22:08.984523 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:08.985345 | orchestrator |
2025-03-11 01:22:08.985452 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-03-11 01:22:08.986100 | orchestrator | Tuesday 11 March 2025 01:22:08 +0000 (0:00:00.181) 0:01:20.945 *********
2025-03-11 01:22:09.661748 | orchestrator | ok: [testbed-node-5] => {
2025-03-11 01:22:09.662239 | orchestrator |     "lvm_report": {
2025-03-11 01:22:09.662289 | orchestrator |         "lv": [
2025-03-11 01:22:09.662793 | orchestrator |             {
2025-03-11 01:22:09.663376 | orchestrator |                 "lv_name": "osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba",
2025-03-11 01:22:09.664065 | orchestrator |                 "vg_name": "ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba"
2025-03-11 01:22:09.665107 | orchestrator |             },
2025-03-11 01:22:09.665600 | orchestrator |             {
2025-03-11 01:22:09.666358 | orchestrator |                 "lv_name": "osd-block-cedb9017-cc47-5b88-9282-51f2e5626d00",
2025-03-11 01:22:09.666591 | orchestrator |                 "vg_name": "ceph-cedb9017-cc47-5b88-9282-51f2e5626d00"
2025-03-11 01:22:09.667099 | orchestrator |             }
2025-03-11 01:22:09.667816 | orchestrator |         ],
2025-03-11 01:22:09.668370 | orchestrator |         "pv": [
2025-03-11 01:22:09.668857 | orchestrator |             {
2025-03-11 01:22:09.669350 | orchestrator |                 "pv_name": "/dev/sdb",
2025-03-11 01:22:09.669861 | orchestrator |                 "vg_name": "ceph-cedb9017-cc47-5b88-9282-51f2e5626d00"
2025-03-11 01:22:09.670610 | orchestrator |             },
2025-03-11 01:22:09.671039 | orchestrator |             {
2025-03-11 01:22:09.671414 | orchestrator |                 "pv_name": "/dev/sdc",
2025-03-11 01:22:09.672051 | orchestrator |                 "vg_name": "ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba"
2025-03-11 01:22:09.672529 | orchestrator |             }
2025-03-11 01:22:09.673881 | orchestrator |         ]
2025-03-11 01:22:09.674903 | orchestrator |     }
2025-03-11 01:22:09.675497 | orchestrator | }
2025-03-11 01:22:09.675946 | orchestrator |
2025-03-11 01:22:09.677040 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:22:09.677131 | orchestrator | 2025-03-11 01:22:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 01:22:09.677952 | orchestrator | 2025-03-11 01:22:09 | INFO  | Please wait and do not abort execution.
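The "Get list of Ceph LVs/PVs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" steps that produce the `lvm_report` above are based on the JSON reporting mode of the LVM tools: `lvs --reportformat json` and `pvs --reportformat json` each emit a `{"report": [{...}]}` document. A hedged sketch of the combine step (the sample command output below is hand-written in that shape, not captured from the testbed):

```python
import json

# Assumed shape of `lvs`/`pvs --reportformat json` output: a top-level
# "report" list whose first element holds the "lv" or "pv" rows.
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-6525db2c-5f0a-5e5f-9376-d59b0a20baba",
     "vg_name": "ceph-6525db2c-5f0a-5e5f-9376-d59b0a20baba"}]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-cedb9017-cc47-5b88-9282-51f2e5626d00"}]}]})

# Merge both documents into the single lvm_report structure printed by the play.
lvm_report = {
    "lv": json.loads(_lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(_pvs_cmd_output)["report"][0]["pv"],
}
print(json.dumps(lvm_report, indent=4))
```

The resulting dict has the same `lv`/`pv` layout as the `lvm_report` debug output in the log.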
2025-03-11 01:22:09.677986 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:22:09.678628 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:22:09.679026 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:22:09.679728 | orchestrator |
2025-03-11 01:22:09.680385 | orchestrator |
2025-03-11 01:22:09.680856 | orchestrator |
2025-03-11 01:22:09.681665 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:22:09.682162 | orchestrator | Tuesday 11 March 2025 01:22:09 +0000 (0:00:00.678) 0:01:21.623 *********
2025-03-11 01:22:09.682344 | orchestrator | ===============================================================================
2025-03-11 01:22:09.683144 | orchestrator | Create block VGs -------------------------------------------------------- 6.90s
2025-03-11 01:22:09.683595 | orchestrator | Create block LVs -------------------------------------------------------- 4.24s
2025-03-11 01:22:09.684620 | orchestrator | Print LVM report data --------------------------------------------------- 2.41s
2025-03-11 01:22:09.684825 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.02s
2025-03-11 01:22:09.684855 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.94s
2025-03-11 01:22:09.685231 | orchestrator | Add known partitions to the list of available block devices ------------- 1.85s
2025-03-11 01:22:09.685621 | orchestrator | Add known links to the list of available block devices ------------------ 1.81s
2025-03-11 01:22:09.685879 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s
2025-03-11 01:22:09.686238 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2025-03-11 01:22:09.686723 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s
2025-03-11 01:22:09.687113 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.18s
2025-03-11 01:22:09.687352 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s
2025-03-11 01:22:09.687849 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2025-03-11 01:22:09.688009 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.81s
2025-03-11 01:22:09.688537 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.79s
2025-03-11 01:22:09.688902 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2025-03-11 01:22:09.689384 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2025-03-11 01:22:09.689637 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-03-11 01:22:09.689785 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.73s
2025-03-11 01:22:09.690115 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.72s
2025-03-11 01:22:11.940159 | orchestrator | 2025-03-11 01:22:11 | INFO  | Task a32d15fe-49e5-48a4-90b4-4448286ec383 (facts) was prepared for execution.
2025-03-11 01:22:15.485388 | orchestrator | 2025-03-11 01:22:11 | INFO  | It takes a moment until task a32d15fe-49e5-48a4-90b4-4448286ec383 (facts) has been started and output is visible here.
2025-03-11 01:22:15.485541 | orchestrator |
2025-03-11 01:22:15.487120 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-03-11 01:22:15.489068 | orchestrator |
2025-03-11 01:22:15.489693 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-03-11 01:22:15.490455 | orchestrator | Tuesday 11 March 2025 01:22:15 +0000 (0:00:00.241) 0:00:00.241 *********
2025-03-11 01:22:17.088282 | orchestrator | ok: [testbed-manager]
2025-03-11 01:22:17.088438 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:22:17.088984 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:22:17.089496 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:22:17.089825 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:22:17.091082 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:22:17.091538 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:22:17.091985 | orchestrator |
2025-03-11 01:22:17.092743 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-03-11 01:22:17.093019 | orchestrator | Tuesday 11 March 2025 01:22:17 +0000 (0:00:01.604) 0:00:01.845 *********
2025-03-11 01:22:17.251301 | orchestrator | skipping: [testbed-manager]
2025-03-11 01:22:17.338611 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:22:17.430404 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:22:17.519107 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:22:17.601737 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:22:18.448083 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:22:18.448990 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:18.449031 | orchestrator |
2025-03-11 01:22:18.450400 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-11 01:22:18.453686 | orchestrator |
2025-03-11 01:22:18.454445 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 01:22:18.454985 | orchestrator | Tuesday 11 March 2025 01:22:18 +0000 (0:00:01.362) 0:00:03.208 *********
2025-03-11 01:22:23.388487 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:22:23.388708 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:22:23.388744 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:22:23.388767 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:22:23.389707 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:22:23.390468 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:22:23.391254 | orchestrator | ok: [testbed-manager]
2025-03-11 01:22:23.391502 | orchestrator |
2025-03-11 01:22:23.391819 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-03-11 01:22:23.392088 | orchestrator |
2025-03-11 01:22:23.392444 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-03-11 01:22:23.393336 | orchestrator | Tuesday 11 March 2025 01:22:23 +0000 (0:00:04.941) 0:00:08.149 *********
2025-03-11 01:22:23.772177 | orchestrator | skipping: [testbed-manager]
2025-03-11 01:22:23.871878 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:22:23.963815 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:22:24.059930 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:22:24.146324 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:22:24.179467 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:22:24.180111 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:22:24.180151 | orchestrator |
2025-03-11 01:22:24.180791 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:22:24.181156 | orchestrator | 2025-03-11 01:22:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 01:22:24.183470 | orchestrator | 2025-03-11 01:22:24 | INFO  | Please wait and do not abort execution.
2025-03-11 01:22:24.183537 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.184019 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.184719 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.185107 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.185430 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.185899 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.187079 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:22:24.187793 | orchestrator |
2025-03-11 01:22:24.187868 | orchestrator | Tuesday 11 March 2025 01:22:24 +0000 (0:00:00.792) 0:00:08.941 *********
2025-03-11 01:22:24.188641 | orchestrator | ===============================================================================
2025-03-11 01:22:24.188713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s
2025-03-11 01:22:24.189399 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.60s
2025-03-11 01:22:24.189878 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2025-03-11 01:22:24.190257 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.79s
2025-03-11 01:22:24.883965 | orchestrator |
2025-03-11 01:22:24.885602 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 11 01:22:24 UTC 2025
2025-03-11 01:22:26.671270 | orchestrator |
2025-03-11 01:22:26.671398 | orchestrator | 2025-03-11 01:22:26 | INFO  | Collection nutshell is prepared for execution
2025-03-11 01:22:26.676535 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [0] - dotfiles
2025-03-11 01:22:26.676612 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [0] - homer
2025-03-11 01:22:26.678336 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [0] - netdata
2025-03-11 01:22:26.678368 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [0] - openstackclient
2025-03-11 01:22:26.678383 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [0] - phpmyadmin
2025-03-11 01:22:26.678397 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [0] - common
2025-03-11 01:22:26.678418 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [1] -- loadbalancer
2025-03-11 01:22:26.678683 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [2] --- opensearch
2025-03-11 01:22:26.678712 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [2] --- mariadb-ng
2025-03-11 01:22:26.678727 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [3] ---- horizon
2025-03-11 01:22:26.678741 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [3] ---- keystone
2025-03-11 01:22:26.678755 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [4] ----- neutron
2025-03-11 01:22:26.678792 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ wait-for-nova
2025-03-11 01:22:26.678817 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [5] ------ octavia
2025-03-11 01:22:26.679166 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- barbican
2025-03-11 01:22:26.679505 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- designate
2025-03-11 01:22:26.679530 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- ironic
2025-03-11 01:22:26.679545 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- placement
2025-03-11 01:22:26.679559 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- magnum
2025-03-11 01:22:26.679633 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [1] -- openvswitch
2025-03-11 01:22:26.679656 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [2] --- ovn
2025-03-11 01:22:26.679719 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [1] -- memcached
2025-03-11 01:22:26.679767 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [1] -- redis
2025-03-11 01:22:26.679782 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [1] -- rabbitmq-ng
2025-03-11 01:22:26.679800 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [0] - kubernetes
2025-03-11 01:22:26.679862 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [1] -- kubeconfig
2025-03-11 01:22:26.679889 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [1] -- copy-kubeconfig
2025-03-11 01:22:26.679920 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [0] - ceph
2025-03-11 01:22:26.681459 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [1] -- ceph-pools
2025-03-11 01:22:26.681564 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [2] --- copy-ceph-keys
2025-03-11 01:22:26.681618 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [3] ---- cephclient
2025-03-11 01:22:26.681633 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-03-11 01:22:26.681652 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [4] ----- wait-for-keystone
2025-03-11 01:22:26.681882 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ kolla-ceph-rgw
2025-03-11 01:22:26.681908 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ glance
2025-03-11 01:22:26.681927 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ cinder
2025-03-11 01:22:26.681961 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ nova
2025-03-11 01:22:26.682093 | orchestrator | 2025-03-11 01:22:26 | INFO  | A [4] ----- prometheus
2025-03-11 01:22:26.682125 | orchestrator | 2025-03-11 01:22:26 | INFO  | D [5] ------ grafana
2025-03-11 01:22:26.834428 | orchestrator | 2025-03-11 01:22:26 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-03-11 01:22:29.252993 | orchestrator | 2025-03-11 01:22:26 | INFO  | Tasks are running in the background
2025-03-11 01:22:29.253110 | orchestrator | 2025-03-11 01:22:29 | INFO  | No task IDs specified, wait for all currently running tasks
2025-03-11 01:22:31.350525 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:31.350969 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:31.351761 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:31.352425 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:31.353178 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:31.358281 | orchestrator | 2025-03-11 01:22:31 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:34.413984 | orchestrator | 2025-03-11 01:22:31 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:34.414198 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:34.414283 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:34.414307 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:34.416195 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:34.416638 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:34.418650 | orchestrator | 2025-03-11 01:22:34 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:37.486722 | orchestrator | 2025-03-11 01:22:34 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:37.486844 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:37.492187 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:37.493955 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:37.502489 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:37.505694 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:37.510534 | orchestrator | 2025-03-11 01:22:37 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:40.601573 | orchestrator | 2025-03-11 01:22:37 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:40.601725 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:40.603806 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:40.609969 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:40.614779 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:40.615821 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:40.615866 | orchestrator | 2025-03-11 01:22:40 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:40.617694 | orchestrator | 2025-03-11 01:22:40 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:43.684133 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:43.686974 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:43.689450 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:43.697315 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:43.697757 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:43.700756 | orchestrator | 2025-03-11 01:22:43 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:46.761388 | orchestrator | 2025-03-11 01:22:43 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:46.761526 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:46.762202 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:46.762724 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:46.764025 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:46.764389 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:46.769837 | orchestrator | 2025-03-11 01:22:46 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:49.849906 | orchestrator | 2025-03-11 01:22:46 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:49.850078 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:49.851281 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:49.856440 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:49.857671 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:49.861250 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state STARTED
2025-03-11 01:22:49.867055 | orchestrator | 2025-03-11 01:22:49 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:52.948330 | orchestrator | 2025-03-11 01:22:49 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:52.948500 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:52.952530 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:52.952619 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:52.955794 | orchestrator |
2025-03-11 01:22:52.955842 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-03-11 01:22:52.955858 | orchestrator |
2025-03-11 01:22:52.955873 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-03-11 01:22:52.955887 | orchestrator | Tuesday 11 March 2025 01:22:36 +0000 (0:00:00.433) 0:00:00.433 *********
2025-03-11 01:22:52.955901 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:22:52.955917 | orchestrator | changed: [testbed-manager]
2025-03-11 01:22:52.955931 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:22:52.955945 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:22:52.955959 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:22:52.955973 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:22:52.955987 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:22:52.956001 | orchestrator |
2025-03-11 01:22:52.956015 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-03-11 01:22:52.956029 | orchestrator | Tuesday 11 March 2025 01:22:41 +0000 (0:00:04.843) 0:00:05.277 *********
2025-03-11 01:22:52.956043 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-03-11 01:22:52.956064 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-03-11 01:22:52.956079 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-03-11 01:22:52.956116 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-03-11 01:22:52.956131 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-03-11 01:22:52.956145 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-03-11 01:22:52.956159 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-03-11 01:22:52.956173 | orchestrator |
2025-03-11 01:22:52.956186 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-03-11 01:22:52.956207 | orchestrator | Tuesday 11 March 2025 01:22:44 +0000 (0:00:02.815) 0:00:08.092 *********
2025-03-11 01:22:52.956223 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:42.379879', 'end': '2025-03-11 01:22:42.389994', 'delta': '0:00:00.010115', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956246 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:43.362918', 'end': '2025-03-11 01:22:43.371638', 'delta': '0:00:00.008720', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956262 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:42.766750', 'end': '2025-03-11 01:22:42.773041', 'delta': '0:00:00.006291', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956302 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:42.383599', 'end': '2025-03-11 01:22:42.389002', 'delta': '0:00:00.005403', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956318 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:43.705183', 'end': '2025-03-11 01:22:43.712926', 'delta': '0:00:00.007743', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956340 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:43.972409', 'end': '2025-03-11 01:22:43.978080', 'delta': '0:00:00.005671', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956360 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:22:44.192524', 'end': '2025-03-11 01:22:44.202534', 'delta': '0:00:00.010010', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:22:52.956375 | orchestrator |
2025-03-11 01:22:52.956389 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-03-11 01:22:52.956405 | orchestrator | Tuesday 11 March 2025 01:22:47 +0000 (0:00:02.561) 0:00:10.654 *********
2025-03-11 01:22:52.956420 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-03-11 01:22:52.956437 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-03-11 01:22:52.956453 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-03-11 01:22:52.956468 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-03-11 01:22:52.956483 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-03-11 01:22:52.956499 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-03-11 01:22:52.956514 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-03-11 01:22:52.956538 | orchestrator |
2025-03-11 01:22:52.956562 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:22:52.956614 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956641 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956667 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956707 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956776 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956803 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956828 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:22:52.956851 | orchestrator |
2025-03-11 01:22:52.956872 | orchestrator | Tuesday 11 March 2025 01:22:51 +0000 (0:00:04.115) 0:00:14.770 *********
2025-03-11 01:22:52.956887 | orchestrator | ===============================================================================
2025-03-11 01:22:52.956901 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.84s
2025-03-11 01:22:52.956915 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.12s
2025-03-11 01:22:52.956929 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.82s
2025-03-11 01:22:52.956944 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.56s
2025-03-11 01:22:52.956963 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:56.026279 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task 575135cc-51e5-48c3-ba4f-675a8233b962 is in state SUCCESS
2025-03-11 01:22:56.026398 | orchestrator | 2025-03-11 01:22:52 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:56.026417 | orchestrator | 2025-03-11 01:22:52 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:56.026450 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:56.041381 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:56.042812 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:56.042845 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:22:56.042867 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:56.047177 | orchestrator | 2025-03-11 01:22:56 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:22:59.165731 | orchestrator | 2025-03-11 01:22:56 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:22:59.165859 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:22:59.176397 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:22:59.185760 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:22:59.191714 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:22:59.200017 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:22:59.207069 | orchestrator | 2025-03-11 01:22:59 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:02.346218 | orchestrator | 2025-03-11 01:22:59 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:23:02.346381 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:23:02.358267 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:23:02.362781 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:23:02.375254 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:23:02.382859 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:23:02.391757 | orchestrator | 2025-03-11 01:23:02 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:05.554557 | orchestrator | 2025-03-11 01:23:02 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:23:05.554720 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:23:05.565929 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:23:05.565969 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:23:08.663048 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:23:08.663185 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:23:08.663218 | orchestrator | 2025-03-11 01:23:05 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:08.663234 | orchestrator | 2025-03-11 01:23:05 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:23:08.663261 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:23:08.667832 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:23:08.668007 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:23:08.669292 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:23:08.679372 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:23:11.775427 | orchestrator | 2025-03-11 01:23:08 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:11.775526 | orchestrator | 2025-03-11 01:23:08 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:23:11.775558 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:23:11.776009 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:23:11.776124 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:23:11.779623 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:23:11.784089 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:23:11.786985 | orchestrator | 2025-03-11 01:23:11 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:14.881954 | orchestrator | 2025-03-11 01:23:11 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:23:14.882153 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:23:14.888057 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED
2025-03-11 01:23:14.888118 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:23:14.897274 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED
2025-03-11 01:23:14.899130 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state STARTED
2025-03-11 01:23:14.905202 | orchestrator | 2025-03-11 01:23:14 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED
2025-03-11 01:23:14.908147 | orchestrator | 2025-03-11 01:23:14 | INFO  | Wait 1
second(s) until the next check 2025-03-11 01:23:17.984651 | orchestrator | 2025-03-11 01:23:17 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:17.996718 | orchestrator | 2025-03-11 01:23:17 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:18.006905 | orchestrator | 2025-03-11 01:23:18 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:18.008501 | orchestrator | 2025-03-11 01:23:18 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:18.011466 | orchestrator | 2025-03-11 01:23:18 | INFO  | Task 93f09fc3-6aca-4296-ab3b-338feadda8cd is in state SUCCESS 2025-03-11 01:23:18.011513 | orchestrator | 2025-03-11 01:23:18 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:21.091868 | orchestrator | 2025-03-11 01:23:18 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:21.091982 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:21.096006 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:21.097824 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:21.102454 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:21.106249 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:21.107536 | orchestrator | 2025-03-11 01:23:21 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:24.189086 | orchestrator | 2025-03-11 01:23:21 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:24.189210 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task 
ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:24.189754 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:24.189791 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:24.189808 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:24.189832 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:24.199315 | orchestrator | 2025-03-11 01:23:24 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:27.296821 | orchestrator | 2025-03-11 01:23:24 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:27.296949 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:27.301252 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:27.302261 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:27.302298 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:27.302320 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:27.305230 | orchestrator | 2025-03-11 01:23:27 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:30.381938 | orchestrator | 2025-03-11 01:23:27 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:30.382096 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:33.495292 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task 
d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:33.495417 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:33.495436 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:33.495451 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:33.495465 | orchestrator | 2025-03-11 01:23:30 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:33.495480 | orchestrator | 2025-03-11 01:23:30 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:33.495510 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:33.503487 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:33.506968 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:33.511983 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:33.521251 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:33.529231 | orchestrator | 2025-03-11 01:23:33 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:33.529830 | orchestrator | 2025-03-11 01:23:33 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:36.600995 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:36.605962 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:36.606069 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task 
c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:36.616089 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:36.616159 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:36.618284 | orchestrator | 2025-03-11 01:23:36 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:39.772397 | orchestrator | 2025-03-11 01:23:36 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:39.772529 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:39.772907 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:39.772944 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:39.773459 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:39.777031 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state STARTED 2025-03-11 01:23:39.779881 | orchestrator | 2025-03-11 01:23:39 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:42.878278 | orchestrator | 2025-03-11 01:23:39 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:42.878406 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:42.884945 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:42.888787 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:42.891928 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task 
bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:42.892502 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task 3df00205-a5be-416f-9d9c-d815fe4a1de1 is in state SUCCESS 2025-03-11 01:23:42.904270 | orchestrator | 2025-03-11 01:23:42 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:45.961132 | orchestrator | 2025-03-11 01:23:42 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:45.961254 | orchestrator | 2025-03-11 01:23:45 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:49.062351 | orchestrator | 2025-03-11 01:23:45 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:49.062461 | orchestrator | 2025-03-11 01:23:45 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:49.062499 | orchestrator | 2025-03-11 01:23:45 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:49.062516 | orchestrator | 2025-03-11 01:23:45 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:49.062531 | orchestrator | 2025-03-11 01:23:45 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:49.062561 | orchestrator | 2025-03-11 01:23:49 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:49.073403 | orchestrator | 2025-03-11 01:23:49 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:49.087243 | orchestrator | 2025-03-11 01:23:49 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:49.101219 | orchestrator | 2025-03-11 01:23:49 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:49.118271 | orchestrator | 2025-03-11 01:23:49 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:52.218727 | orchestrator | 2025-03-11 01:23:49 | INFO  | Wait 1 
second(s) until the next check 2025-03-11 01:23:52.218863 | orchestrator | 2025-03-11 01:23:52 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:55.289447 | orchestrator | 2025-03-11 01:23:52 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:55.289547 | orchestrator | 2025-03-11 01:23:52 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:55.289566 | orchestrator | 2025-03-11 01:23:52 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:55.289666 | orchestrator | 2025-03-11 01:23:52 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:55.289729 | orchestrator | 2025-03-11 01:23:52 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:55.289760 | orchestrator | 2025-03-11 01:23:55 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:55.290149 | orchestrator | 2025-03-11 01:23:55 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:55.294087 | orchestrator | 2025-03-11 01:23:55 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:55.294558 | orchestrator | 2025-03-11 01:23:55 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:23:55.294622 | orchestrator | 2025-03-11 01:23:55 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:23:58.373021 | orchestrator | 2025-03-11 01:23:55 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:23:58.373139 | orchestrator | 2025-03-11 01:23:58 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:23:58.374762 | orchestrator | 2025-03-11 01:23:58 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:23:58.377978 | orchestrator | 2025-03-11 01:23:58 | INFO  | Task 
c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:23:58.383849 | orchestrator | 2025-03-11 01:23:58 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:24:01.489017 | orchestrator | 2025-03-11 01:23:58 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:24:01.489119 | orchestrator | 2025-03-11 01:23:58 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:24:01.489153 | orchestrator | 2025-03-11 01:24:01 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:24:01.491266 | orchestrator | 2025-03-11 01:24:01 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:24:01.494272 | orchestrator | 2025-03-11 01:24:01 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:24:01.498167 | orchestrator | 2025-03-11 01:24:01 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:24:01.504891 | orchestrator | 2025-03-11 01:24:01 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:24:04.577919 | orchestrator | 2025-03-11 01:24:01 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:24:04.578112 | orchestrator | 2025-03-11 01:24:04 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:24:04.578321 | orchestrator | 2025-03-11 01:24:04 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state STARTED 2025-03-11 01:24:04.578349 | orchestrator | 2025-03-11 01:24:04 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:24:04.578984 | orchestrator | 2025-03-11 01:24:04 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state STARTED 2025-03-11 01:24:04.579255 | orchestrator | 2025-03-11 01:24:04 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:24:07.617342 | orchestrator | 2025-03-11 01:24:04 | INFO  | Wait 1 
second(s) until the next check 2025-03-11 01:24:07.617506 | orchestrator | 2025-03-11 01:24:07 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED 2025-03-11 01:24:07.617622 | orchestrator | 2025-03-11 01:24:07 | INFO  | Task d1e7569e-fca7-4095-8cb8-7697b961bbbd is in state SUCCESS 2025-03-11 01:24:07.618995 | orchestrator | 2025-03-11 01:24:07.619041 | orchestrator | 2025-03-11 01:24:07.619055 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-03-11 01:24:07.619070 | orchestrator | 2025-03-11 01:24:07.619085 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-03-11 01:24:07.619100 | orchestrator | Tuesday 11 March 2025 01:22:37 +0000 (0:00:00.526) 0:00:00.526 ********* 2025-03-11 01:24:07.619115 | orchestrator | ok: [testbed-manager] => { 2025-03-11 01:24:07.619131 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-03-11 01:24:07.619147 | orchestrator | }
2025-03-11 01:24:07.619161 | orchestrator |
2025-03-11 01:24:07.619175 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-03-11 01:24:07.619189 | orchestrator | Tuesday 11 March 2025 01:22:38 +0000 (0:00:00.776) 0:00:01.302 *********
2025-03-11 01:24:07.619203 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.619218 | orchestrator |
2025-03-11 01:24:07.619232 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-03-11 01:24:07.619246 | orchestrator | Tuesday 11 March 2025 01:22:40 +0000 (0:00:02.269) 0:00:03.572 *********
2025-03-11 01:24:07.619259 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-03-11 01:24:07.619273 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-03-11 01:24:07.619287 | orchestrator |
2025-03-11 01:24:07.619301 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-03-11 01:24:07.619315 | orchestrator | Tuesday 11 March 2025 01:22:42 +0000 (0:00:01.606) 0:00:05.179 *********
2025-03-11 01:24:07.619329 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.619343 | orchestrator |
2025-03-11 01:24:07.619357 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-03-11 01:24:07.619370 | orchestrator | Tuesday 11 March 2025 01:22:45 +0000 (0:00:02.720) 0:00:07.901 *********
2025-03-11 01:24:07.619384 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.619398 | orchestrator |
2025-03-11 01:24:07.619411 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-03-11 01:24:07.619425 | orchestrator | Tuesday 11 March 2025 01:22:46 +0000 (0:00:01.738) 0:00:09.639 *********
2025-03-11 01:24:07.619439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-03-11 01:24:07.619453 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.619467 | orchestrator |
2025-03-11 01:24:07.619481 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-03-11 01:24:07.619495 | orchestrator | Tuesday 11 March 2025 01:23:13 +0000 (0:00:26.399) 0:00:36.038 *********
2025-03-11 01:24:07.619508 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.619522 | orchestrator |
2025-03-11 01:24:07.619537 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:24:07.619551 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:24:07.619569 | orchestrator |
2025-03-11 01:24:07.619609 | orchestrator | Tuesday 11 March 2025 01:23:16 +0000 (0:00:03.850) 0:00:39.889 *********
2025-03-11 01:24:07.619626 | orchestrator | ===============================================================================
2025-03-11 01:24:07.619641 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.40s
2025-03-11 01:24:07.619658 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.85s
2025-03-11 01:24:07.619673 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.72s
2025-03-11 01:24:07.619689 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.27s
2025-03-11 01:24:07.619705 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.74s
2025-03-11 01:24:07.619726 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.61s
2025-03-11 01:24:07.619756 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.78s
2025-03-11 01:24:07.619772 | orchestrator |
2025-03-11 01:24:07.619787 | orchestrator |
2025-03-11 01:24:07.619803 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-03-11 01:24:07.619820 | orchestrator |
2025-03-11 01:24:07.619836 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-03-11 01:24:07.619852 | orchestrator | Tuesday 11 March 2025 01:22:37 +0000 (0:00:00.821) 0:00:00.821 *********
2025-03-11 01:24:07.619868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-03-11 01:24:07.619886 | orchestrator |
2025-03-11 01:24:07.619902 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-03-11 01:24:07.619918 | orchestrator | Tuesday 11 March 2025 01:22:38 +0000 (0:00:01.379) 0:00:02.201 *********
2025-03-11 01:24:07.619931 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-03-11 01:24:07.619946 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-03-11 01:24:07.619959 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-03-11 01:24:07.619974 | orchestrator |
2025-03-11 01:24:07.619988 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-03-11 01:24:07.620001 | orchestrator | Tuesday 11 March 2025 01:22:41 +0000 (0:00:02.248) 0:00:04.449 *********
2025-03-11 01:24:07.620015 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.620029 | orchestrator |
2025-03-11 01:24:07.620043 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-03-11 01:24:07.620058 | orchestrator | Tuesday 11 March 2025 01:22:43 +0000 (0:00:02.473) 0:00:06.923 *********
2025-03-11 01:24:07.620072 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-03-11 01:24:07.620086 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.620100 | orchestrator |
2025-03-11 01:24:07.620123 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-03-11 01:24:07.620139 | orchestrator | Tuesday 11 March 2025 01:23:24 +0000 (0:00:40.749) 0:00:47.673 *********
2025-03-11 01:24:07.620153 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.620167 | orchestrator |
2025-03-11 01:24:07.620181 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-03-11 01:24:07.620194 | orchestrator | Tuesday 11 March 2025 01:23:26 +0000 (0:00:01.849) 0:00:49.523 *********
2025-03-11 01:24:07.620208 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.620222 | orchestrator |
2025-03-11 01:24:07.620236 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-03-11 01:24:07.620250 | orchestrator | Tuesday 11 March 2025 01:23:27 +0000 (0:00:01.253) 0:00:50.776 *********
2025-03-11 01:24:07.620264 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.620278 | orchestrator |
2025-03-11 01:24:07.620292 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-03-11 01:24:07.620306 | orchestrator | Tuesday 11 March 2025 01:23:31 +0000 (0:00:03.892) 0:00:54.668 *********
2025-03-11 01:24:07.620320 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.620333 | orchestrator |
2025-03-11 01:24:07.620347 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-03-11 01:24:07.620361 | orchestrator | Tuesday 11 March 2025 01:23:35 +0000 (0:00:04.085) 0:00:58.754 *********
2025-03-11 01:24:07.620375 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.620389 | orchestrator |
2025-03-11 01:24:07.620403 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-03-11 01:24:07.620417 | orchestrator | Tuesday 11 March 2025 01:23:37 +0000 (0:00:02.287) 0:01:01.041 *********
2025-03-11 01:24:07.620431 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.620444 | orchestrator |
2025-03-11 01:24:07.620458 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:24:07.620479 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:24:07.620493 | orchestrator |
2025-03-11 01:24:07.620507 | orchestrator | Tuesday 11 March 2025 01:23:38 +0000 (0:00:01.135) 0:01:02.177 *********
2025-03-11 01:24:07.620520 | orchestrator | ===============================================================================
2025-03-11 01:24:07.620534 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.75s
2025-03-11 01:24:07.620548 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 4.09s
2025-03-11 01:24:07.620562 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.89s
2025-03-11 01:24:07.620606 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.47s
2025-03-11 01:24:07.620622 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 2.29s
2025-03-11 01:24:07.620642 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.25s
2025-03-11 01:24:07.620656 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.86s
2025-03-11 01:24:07.620670 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.38s
2025-03-11 01:24:07.620683 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.25s
2025-03-11 01:24:07.620698 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.14s
2025-03-11 01:24:07.620711 | orchestrator |
2025-03-11 01:24:07.620725 | orchestrator |
2025-03-11 01:24:07.620739 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:24:07.620753 | orchestrator |
2025-03-11 01:24:07.620767 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:24:07.620781 | orchestrator | Tuesday 11 March 2025 01:22:36 +0000 (0:00:00.684) 0:00:00.684 *********
2025-03-11 01:24:07.620794 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-03-11 01:24:07.620809 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-03-11 01:24:07.620822 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-03-11 01:24:07.620836 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-03-11 01:24:07.620850 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-03-11 01:24:07.620864 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-03-11 01:24:07.620878 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-03-11 01:24:07.620892 | orchestrator |
2025-03-11 01:24:07.620905 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-03-11 01:24:07.620919 | orchestrator |
2025-03-11 01:24:07.620933 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-03-11 01:24:07.620947 | orchestrator | Tuesday 11 March 2025 01:22:40 +0000 (0:00:03.651) 0:00:04.335 *********
2025-03-11 01:24:07.620974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:24:07.621014 | orchestrator |
2025-03-11 01:24:07.621029 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-03-11 01:24:07.621043 | orchestrator | Tuesday 11 March 2025 01:22:44 +0000 (0:00:03.964) 0:00:08.300 *********
2025-03-11 01:24:07.621057 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:24:07.621071 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.621085 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:24:07.621098 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:24:07.621112 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:24:07.621126 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:24:07.621140 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:24:07.621153 | orchestrator |
2025-03-11 01:24:07.621168 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-03-11 01:24:07.621195 | orchestrator | Tuesday 11 March 2025 01:22:47 +0000 (0:00:04.330) 0:00:11.430 *********
2025-03-11 01:24:07.621210 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:24:07.621224 | orchestrator | ok: [testbed-manager]
2025-03-11 01:24:07.621238 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:24:07.621252 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:24:07.621266 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:24:07.621280 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:24:07.621294 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:24:07.621307 | orchestrator |
2025-03-11 01:24:07.621321 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-03-11 01:24:07.621335 | orchestrator | Tuesday 11 March 2025 01:22:51 +0000 (0:00:03.129) 0:00:15.760 *********
2025-03-11 01:24:07.621349 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:24:07.621363 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:24:07.621377 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.621396 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:24:07.621410 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:24:07.621424 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:24:07.621438 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:24:07.621452 | orchestrator |
2025-03-11 01:24:07.621466 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-03-11 01:24:07.621480 | orchestrator | Tuesday 11 March 2025 01:22:55 +0000 (0:00:03.264) 0:00:19.025 *********
2025-03-11 01:24:07.621494 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:24:07.621507 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:24:07.621521 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:24:07.621534 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:24:07.621548 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:24:07.621562 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:24:07.621624 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.621641 | orchestrator |
2025-03-11 01:24:07.621655 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-03-11 01:24:07.621669 | orchestrator | Tuesday 11 March 2025 01:23:06 +0000 (0:00:11.710) 0:00:30.736 *********
2025-03-11 01:24:07.621683 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:24:07.621697 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:24:07.621711 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:24:07.621724 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:24:07.621738 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:24:07.621752 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:24:07.621766 | orchestrator | changed: [testbed-manager]
2025-03-11 01:24:07.621779 | orchestrator |
2025-03-11 01:24:07.621794 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-03-11 01:24:07.621808 | orchestrator | Tuesday 11 March 2025 01:23:26 +0000 (0:00:20.048) 0:00:50.785 *********
2025-03-11 01:24:07.621823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:24:07.621856 | orchestrator |
2025-03-11 01:24:07.621870 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-03-11 01:24:07.621885 | orchestrator | Tuesday 11 March 2025 01:23:29 +0000 (0:00:02.950) 0:00:53.735 *********
2025-03-11 01:24:07.621949 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-03-11 01:24:07.621964 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-03-11 01:24:07.621977 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-03-11 01:24:07.621989 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-03-11 01:24:07.622001 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-03-11 01:24:07.622013 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-03-11 01:24:07.622072 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-03-11 01:24:07.622097 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-03-11 01:24:07.622110 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-03-11 01:24:07.622123 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-03-11 01:24:07.622135 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-03-11 01:24:07.622147 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-03-11 01:24:07.622160 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-03-11 01:24:07.622172 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-03-11 01:24:07.622184 | orchestrator |
2025-03-11 01:24:07.622197 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-03-11 01:24:07.622210 | orchestrator | Tuesday 11 March 2025 01:23:41 +0000 (0:00:12.099) 0:01:05.834 ********* 2025-03-11 01:24:07.622222 | orchestrator | ok: [testbed-manager] 2025-03-11 01:24:07.622235 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:24:07.622247 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:24:07.622260 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:24:07.622272 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:24:07.622285 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:24:07.622297 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:24:07.622309 | orchestrator | 2025-03-11 01:24:07.622322 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-03-11 01:24:07.622334 | orchestrator | Tuesday 11 March 2025 01:23:45 +0000 (0:00:03.479) 0:01:09.314 ********* 2025-03-11 01:24:07.622346 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:24:07.622359 | orchestrator | changed: [testbed-manager] 2025-03-11 01:24:07.622371 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:24:07.622383 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:24:07.622396 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:24:07.622408 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:24:07.622420 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:24:07.622433 | orchestrator | 2025-03-11 01:24:07.622445 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-03-11 01:24:07.622458 | orchestrator | Tuesday 11 March 2025 01:23:48 +0000 (0:00:03.309) 0:01:12.623 ********* 2025-03-11 01:24:07.622470 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:24:07.622482 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:24:07.622494 | orchestrator | ok: [testbed-node-0] 2025-03-11 
01:24:07.622507 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:24:07.622527 | orchestrator | ok: [testbed-manager] 2025-03-11 01:24:07.622540 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:24:07.622553 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:24:07.622565 | orchestrator | 2025-03-11 01:24:07.622606 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-03-11 01:24:07.622624 | orchestrator | Tuesday 11 March 2025 01:23:52 +0000 (0:00:03.342) 0:01:15.966 ********* 2025-03-11 01:24:07.622637 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:24:07.622649 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:24:07.622661 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:24:07.622674 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:24:07.622686 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:24:07.622698 | orchestrator | ok: [testbed-manager] 2025-03-11 01:24:07.622711 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:24:07.622723 | orchestrator | 2025-03-11 01:24:07.622735 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-03-11 01:24:07.622748 | orchestrator | Tuesday 11 March 2025 01:23:54 +0000 (0:00:02.688) 0:01:18.654 ********* 2025-03-11 01:24:07.622760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-03-11 01:24:07.622774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-11 01:24:07.622787 | orchestrator | 2025-03-11 01:24:07.622805 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-03-11 01:24:07.622817 | orchestrator | Tuesday 11 March 2025 01:23:57 +0000 (0:00:02.249) 0:01:20.903 ********* 2025-03-11 
01:24:07.622830 | orchestrator | changed: [testbed-manager] 2025-03-11 01:24:07.622842 | orchestrator | 2025-03-11 01:24:07.622855 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-03-11 01:24:07.622867 | orchestrator | Tuesday 11 March 2025 01:24:00 +0000 (0:00:03.798) 0:01:24.702 ********* 2025-03-11 01:24:07.622879 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:24:07.622892 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:24:07.622904 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:24:07.622916 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:24:07.622929 | orchestrator | changed: [testbed-manager] 2025-03-11 01:24:07.622949 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:24:07.622962 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:24:07.622975 | orchestrator | 2025-03-11 01:24:07.622987 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:24:07.623000 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623013 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623026 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623043 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623056 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623069 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623081 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:24:07.623093 | orchestrator | 2025-03-11 
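A PLAY RECAP line has a fixed `host : counter=value …` shape, which makes it easy to machine-check in CI post-processing. As an illustration (the helper below is hypothetical and not part of this job), recap lines like the ones above can be parsed and validated like this:

```python
import re

# Hypothetical helper (not part of the job): parse an Ansible PLAY RECAP line
# such as "testbed-manager : ok=16 changed=8 unreachable=0 failed=0 ...".
def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    host, _, counters = line.partition(":")
    pairs = re.findall(r"(\w+)=(\d+)", counters)
    if not host.strip() or not pairs:
        raise ValueError(f"not a recap line: {line!r}")
    return host.strip(), {name: int(value) for name, value in pairs}

host, counters = parse_recap_line(
    "testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
# A play is healthy when nothing failed and every host was reachable.
healthy = counters["failed"] == 0 and counters["unreachable"] == 0
```

All seven recap lines in this play parse to `failed=0` and `unreachable=0`, i.e. the netdata rollout succeeded on every host.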
Tuesday 11 March 2025  01:24:04 +0000 (0:00:03.766)       0:01:28.468 *********
===============================================================================
osism.services.netdata : Install package netdata ----------------------- 20.05s
osism.services.netdata : Copy configuration files ---------------------- 12.10s
osism.services.netdata : Add repository -------------------------------- 11.71s
osism.services.netdata : Install apt-transport-https package ------------ 4.33s
osism.services.netdata : Include distribution specific install tasks ---- 3.97s
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.80s
osism.services.netdata : Restart service netdata ------------------------ 3.77s
Group hosts based on enabled services ----------------------------------- 3.65s
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 3.48s
osism.services.netdata : Add netdata user to docker group --------------- 3.34s
osism.services.netdata : Opt out from anonymous statistics -------------- 3.31s
osism.services.netdata : Add repository gpg key ------------------------- 3.26s
osism.services.netdata : Remove old architecture-dependent repository --- 3.13s
osism.services.netdata : Include config tasks --------------------------- 2.95s
osism.services.netdata : Manage service netdata ------------------------- 2.69s
osism.services.netdata : Include host type specific tasks --------------- 2.25s
2025-03-11 01:24:07 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:07 | INFO  | Task bd46ebfa-7c71-4920-aa28-e2a36b87fc1c is in state SUCCESS
2025-03-11 01:24:07 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:07 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:10 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:10 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:10 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:10 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:13 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:13 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:13 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:13 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:16 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:16 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:16 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:16 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:19 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:19 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:19 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:19 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:22 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:22 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:22 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:22 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:25 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:26 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:26 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:26 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:29 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:29 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:29 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:29 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:32 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:32 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:32 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:32 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:35 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:35 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:35 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:35 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:38 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:38 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:38 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:38 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:41 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:41 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:41 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:41 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:44 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:44 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:44 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:44 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:47 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:47 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:47 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:47 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:50 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:50 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:50 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:50 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:53 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:53 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:53 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:53 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:56 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:56 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:56 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:56 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:24:59 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:24:59 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:24:59 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:24:59 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:25:02 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:25:02 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:25:02 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:25:02 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:25:05 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:25:05 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:25:05 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:25:05 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:25:08 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:25:08 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:25:08 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:25:08 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:25:11 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state STARTED
2025-03-11 01:25:11 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:25:11 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:25:11 | INFO  | Wait 1 second(s) until the next check

PLAY [Apply role phpmyadmin] ***************************************************

TASK
[osism.services.phpmyadmin : Create traefik external network] *************
Tuesday 11 March 2025  01:23:02 +0000 (0:00:00.517)       0:00:00.517 *********
ok: [testbed-manager]

TASK [osism.services.phpmyadmin : Create required directories] *****************
Tuesday 11 March 2025  01:23:04 +0000 (0:00:02.159)       0:00:02.677 *********
changed: [testbed-manager] => (item=/opt/phpmyadmin)

TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
Tuesday 11 March 2025  01:23:05 +0000 (0:00:01.590)       0:00:04.267 *********
changed: [testbed-manager]

TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
Tuesday 11 March 2025  01:23:08 +0000 (0:00:03.029)       0:00:07.297 *********
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
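The `FAILED - RETRYING: … (10 retries left).` line above comes from Ansible's `retries`/`until` mechanism: the task is re-run until its condition holds or the retry budget is exhausted, and here it succeeded on a later attempt (accounting for most of the 54.40s this task took). A minimal sketch of that control flow, in generic Python with a hypothetical `check` callable standing in for the real module invocation:

```python
import time

def run_with_retries(check, retries=10, delay=5.0, sleep=time.sleep):
    """Re-run `check` until it returns truthy, like Ansible's retries/until.

    `check` is a stand-in for the real task; each falsy result corresponds
    to one "FAILED - RETRYING: ... (N retries left)." line in the log.
    """
    for attempt in range(retries + 1):  # first try + `retries` retries
        result = check()
        if result:
            return result
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            sleep(delay)
    raise RuntimeError("task failed after all retries")

# Example: succeeds on the second attempt, mirroring the single retry above.
attempts = iter([False, True])
assert run_with_retries(lambda: next(attempts), retries=10, delay=0) is True
```

In the log, one falsy result was enough to print the `(10 retries left)` line before the service came up and the task reported `ok`.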
ok: [testbed-manager]

RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
Tuesday 11 March 2025  01:24:03 +0000 (0:00:54.400)       0:01:01.698 *********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Tuesday 11 March 2025  01:24:06 +0000 (0:00:03.603)       0:01:05.301 *********
===============================================================================
osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.40s
osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.60s
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.03s
osism.services.phpmyadmin : Create traefik external network ------------- 2.16s
osism.services.phpmyadmin : Create required directories ----------------- 1.59s


PLAY [Apply role common] *******************************************************

TASK [common : include_tasks]
**************************************************
Tuesday 11 March 2025  01:22:30 +0000 (0:00:00.433)       0:00:00.433 *********
included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [common : Ensuring config directories exist] ******************************
Tuesday 11 March 2025  01:22:33 +0000 (0:00:02.151)       0:00:02.585 *********
changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])

TASK [common : include_tasks] **************************************************
Tuesday 11 March 2025  01:22:38 +0000 (0:00:05.560)       0:00:08.145 *********
included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-cert-copy : common | Copying over extra CA certificates] *********
Tuesday 11 March 2025  01:22:41 +0000 (0:00:02.810)       0:00:10.955 *********
changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'],
'dimensions': {}}}) 2025-03-11 01:25:14.978300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-03-11 01:25:14.978332 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.978425 | orchestrator | 2025-03-11 
01:25:14.978433 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-03-11 01:25:14.978441 | orchestrator | Tuesday 11 March 2025 01:22:47 +0000 (0:00:05.757) 0:00:16.713 ********* 2025-03-11 01:25:14.978454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978480 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978488 | orchestrator | skipping: 
[testbed-manager] 2025-03-11 01:25:14.978497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-03-11 01:25:14.978584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978600 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:25:14.978608 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.978616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978641 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:25:14.978649 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:25:14.978661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978691 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.978699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978724 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.978732 | orchestrator | 2025-03-11 01:25:14.978740 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-03-11 01:25:14.978748 | orchestrator | Tuesday 11 March 2025 01:22:50 +0000 (0:00:03.236) 0:00:19.949 ********* 2025-03-11 01:25:14.978756 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978790 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978807 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:25:14.978815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-03-11 01:25:14.978835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14 | INFO  | Task ecf53845-a761-4e3e-a0f3-0cacffa461e8 is in state SUCCESS 2025-03-11 01:25:14.978887 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:25:14.978895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978920 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.978928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978961 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:25:14.978969 | orchestrator | skipping: 
[testbed-node-3] 2025-03-11 01:25:14.978977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.978990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.978998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.979007 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.979015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:25:14.979023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.979031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.979039 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.979047 | orchestrator | 2025-03-11 01:25:14.979055 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-03-11 01:25:14.979063 | orchestrator | Tuesday 11 March 2025 01:22:54 +0000 (0:00:03.607) 0:00:23.557 ********* 2025-03-11 01:25:14.979071 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:25:14.979084 | orchestrator | 
skipping: [testbed-node-0] 2025-03-11 01:25:14.979093 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.979101 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:25:14.979109 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:25:14.979117 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.979125 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.979132 | orchestrator | 2025-03-11 01:25:14.979140 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-03-11 01:25:14.979149 | orchestrator | Tuesday 11 March 2025 01:22:55 +0000 (0:00:01.572) 0:00:25.130 ********* 2025-03-11 01:25:14.979156 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:25:14.979164 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:25:14.979172 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.979180 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:25:14.979188 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:25:14.979196 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.979204 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.979211 | orchestrator | 2025-03-11 01:25:14.979219 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-03-11 01:25:14.979227 | orchestrator | Tuesday 11 March 2025 01:22:57 +0000 (0:00:01.537) 0:00:26.667 ********* 2025-03-11 01:25:14.979235 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:14.979243 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.979251 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.979259 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.979267 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.979274 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.979282 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.979290 | orchestrator | 2025-03-11 
01:25:14.979298 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-03-11 01:25:14.979310 | orchestrator | Tuesday 11 March 2025 01:23:35 +0000 (0:00:38.434) 0:01:05.102 ********* 2025-03-11 01:25:14.979319 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:14.979359 | orchestrator | ok: [testbed-manager] 2025-03-11 01:25:14.979368 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:25:14.979393 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:25:14.979402 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:25:14.979410 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:25:14.979418 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:25:14.979426 | orchestrator | 2025-03-11 01:25:14.979434 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-11 01:25:14.979442 | orchestrator | Tuesday 11 March 2025 01:23:40 +0000 (0:00:05.268) 0:01:10.371 ********* 2025-03-11 01:25:14.979450 | orchestrator | ok: [testbed-manager] 2025-03-11 01:25:14.979458 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:14.979470 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:25:14.979478 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:25:14.979486 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:25:14.979494 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:25:14.979502 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:25:14.979510 | orchestrator | 2025-03-11 01:25:14.979518 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-03-11 01:25:14.979526 | orchestrator | Tuesday 11 March 2025 01:23:43 +0000 (0:00:02.571) 0:01:12.943 ********* 2025-03-11 01:25:14.979534 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:25:14.979542 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:25:14.979575 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.979583 | orchestrator | skipping: [testbed-node-2] 
2025-03-11 01:25:14.979591 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:25:14.979599 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.979607 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.979615 | orchestrator | 2025-03-11 01:25:14.979623 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-11 01:25:14.979631 | orchestrator | Tuesday 11 March 2025 01:23:45 +0000 (0:00:02.046) 0:01:14.989 ********* 2025-03-11 01:25:14.979645 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:25:14.979653 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:25:14.979661 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:25:14.979669 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:25:14.979677 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:25:14.979685 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:25:14.979693 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:25:14.979700 | orchestrator | 2025-03-11 01:25:14.979709 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-03-11 01:25:14.979717 | orchestrator | Tuesday 11 March 2025 01:23:46 +0000 (0:00:01.447) 0:01:16.437 ********* 2025-03-11 01:25:14.979725 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979769 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979821 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.979866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-03-11 01:25:14.979884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.979946 | orchestrator | 2025-03-11 01:25:14.979954 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-03-11 01:25:14.979962 | orchestrator | Tuesday 11 March 2025 01:23:54 +0000 (0:00:07.483) 0:01:23.921 ********* 2025-03-11 01:25:14.979970 | orchestrator | [WARNING]: Skipped 2025-03-11 01:25:14.979979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-03-11 01:25:14.979987 | orchestrator | to this access issue: 2025-03-11 01:25:14.979995 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-03-11 01:25:14.980003 | orchestrator | directory 2025-03-11 01:25:14.980011 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-03-11 01:25:14.980019 | orchestrator | 2025-03-11 01:25:14.980027 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-03-11 01:25:14.980036 | orchestrator | Tuesday 11 March 2025 01:23:55 +0000 (0:00:01.325) 0:01:25.246 ********* 2025-03-11 01:25:14.980044 | orchestrator | [WARNING]: Skipped 2025-03-11 01:25:14.980052 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-03-11 01:25:14.980060 | orchestrator | to this access issue: 2025-03-11 01:25:14.980068 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-03-11 01:25:14.980076 | orchestrator | directory 2025-03-11 01:25:14.980084 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-11 01:25:14.980092 | orchestrator | 2025-03-11 01:25:14.980104 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-03-11 01:25:14.980112 | orchestrator | Tuesday 11 March 2025 01:23:56 +0000 (0:00:00.714) 0:01:25.961 ********* 2025-03-11 01:25:14.980120 | orchestrator | [WARNING]: Skipped 2025-03-11 01:25:14.980128 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-03-11 01:25:14.980136 | orchestrator | to this access issue: 2025-03-11 01:25:14.980144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-03-11 01:25:14.980152 | orchestrator | directory 2025-03-11 01:25:14.980160 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-11 01:25:14.980168 | orchestrator | 2025-03-11 01:25:14.980176 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-03-11 01:25:14.980183 | orchestrator | Tuesday 11 March 2025 01:23:57 +0000 (0:00:00.963) 0:01:26.924 ********* 2025-03-11 01:25:14.980191 | orchestrator | [WARNING]: Skipped 2025-03-11 01:25:14.980199 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-03-11 01:25:14.980207 | orchestrator | to this access issue: 2025-03-11 01:25:14.980215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-03-11 01:25:14.980223 | orchestrator | directory 2025-03-11 01:25:14.980231 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-11 01:25:14.980239 | orchestrator | 2025-03-11 01:25:14.980247 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-03-11 01:25:14.980255 | orchestrator | Tuesday 11 March 2025 01:23:58 +0000 (0:00:01.010) 0:01:27.935 ********* 2025-03-11 01:25:14.980262 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.980270 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.980278 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.980286 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.980294 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.980302 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.980310 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.980317 | orchestrator | 2025-03-11 01:25:14.980325 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-03-11 01:25:14.980338 | orchestrator | Tuesday 11 March 2025 01:24:04 +0000 (0:00:05.780) 0:01:33.715 ********* 2025-03-11 01:25:14.980346 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980370 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980378 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980386 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980394 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-11 01:25:14.980402 | orchestrator | 2025-03-11 01:25:14.980410 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-03-11 01:25:14.980418 | orchestrator | Tuesday 11 March 2025 01:24:07 +0000 (0:00:03.317) 0:01:37.033 ********* 2025-03-11 01:25:14.980426 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.980434 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.980446 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.980454 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.980462 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.980470 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.980478 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.980486 | orchestrator | 2025-03-11 01:25:14.980494 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-03-11 01:25:14.980502 | orchestrator | Tuesday 11 March 2025 01:24:09 +0000 (0:00:02.339) 0:01:39.373 ********* 2025-03-11 01:25:14.980514 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980531 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-03-11 01:25:14.980571 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980583 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980685 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980700 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980717 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980756 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.980776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:25:14.980792 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980801 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.980809 | orchestrator | 2025-03-11 01:25:14.980818 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-03-11 01:25:14.980826 | orchestrator | Tuesday 11 March 2025 01:24:12 +0000 (0:00:02.384) 0:01:41.757 ********* 2025-03-11 01:25:14.980834 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980842 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980850 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980866 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980874 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980886 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-11 01:25:14.980894 | orchestrator | 2025-03-11 01:25:14.980903 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-03-11 01:25:14.980911 | orchestrator | Tuesday 11 March 2025 01:24:14 +0000 (0:00:02.579) 0:01:44.337 ********* 2025-03-11 01:25:14.980919 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980927 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980951 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980959 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980967 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:25:14.980975 | orchestrator | 2025-03-11 01:25:14.980983 | orchestrator | TASK [common : Check common containers] **************************************** 2025-03-11 
01:25:14.981015 | orchestrator | Tuesday 11 March 2025 01:24:18 +0000 (0:00:03.578) 0:01:47.915 ********* 2025-03-11 01:25:14.981038 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981119 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:25:14.981217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981239 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981255 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:25:14.981289 | orchestrator | 2025-03-11 01:25:14.981297 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-03-11 01:25:14.981308 | orchestrator | Tuesday 11 March 2025 01:24:23 +0000 (0:00:04.734) 0:01:52.650 ********* 2025-03-11 01:25:14.981317 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.981325 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.981333 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.981341 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.981349 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.981361 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.981369 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.981377 | orchestrator | 2025-03-11 01:25:14.981389 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-03-11 01:25:14.981397 | orchestrator | Tuesday 11 March 2025 01:24:25 +0000 (0:00:02.143) 0:01:54.794 ********* 2025-03-11 01:25:14.981405 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.981412 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.981420 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.981431 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.981439 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.981447 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.981455 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.981463 | orchestrator | 2025-03-11 01:25:14.981471 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981479 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:01.909) 0:01:56.703 ********* 2025-03-11 01:25:14.981487 | orchestrator | 2025-03-11 01:25:14.981495 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-03-11 01:25:14.981503 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.060) 0:01:56.764 ********* 2025-03-11 01:25:14.981511 | orchestrator | 2025-03-11 01:25:14.981519 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981526 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.061) 0:01:56.825 ********* 2025-03-11 01:25:14.981534 | orchestrator | 2025-03-11 01:25:14.981542 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981567 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.061) 0:01:56.887 ********* 2025-03-11 01:25:14.981575 | orchestrator | 2025-03-11 01:25:14.981583 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981591 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.282) 0:01:57.170 ********* 2025-03-11 01:25:14.981600 | orchestrator | 2025-03-11 01:25:14.981608 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981616 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.073) 0:01:57.243 ********* 2025-03-11 01:25:14.981624 | orchestrator | 2025-03-11 01:25:14.981632 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-11 01:25:14.981640 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.069) 0:01:57.313 ********* 2025-03-11 01:25:14.981648 | orchestrator | 2025-03-11 01:25:14.981656 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-03-11 01:25:14.981664 | orchestrator | Tuesday 11 March 2025 01:24:27 +0000 (0:00:00.091) 0:01:57.405 ********* 2025-03-11 01:25:14.981672 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.981680 | 
orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.981688 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.981696 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.981704 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.981712 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.981720 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.981728 | orchestrator | 2025-03-11 01:25:14.981736 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-03-11 01:25:14.981744 | orchestrator | Tuesday 11 March 2025 01:24:37 +0000 (0:00:10.027) 0:02:07.433 ********* 2025-03-11 01:25:14.981752 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.981760 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.981768 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.981776 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.981784 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.981792 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.981800 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.981808 | orchestrator | 2025-03-11 01:25:14.981816 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-03-11 01:25:14.981828 | orchestrator | Tuesday 11 March 2025 01:25:00 +0000 (0:00:22.469) 0:02:29.902 ********* 2025-03-11 01:25:14.981836 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:25:14.981844 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:14.981852 | orchestrator | ok: [testbed-manager] 2025-03-11 01:25:14.981861 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:25:14.981869 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:25:14.981877 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:25:14.981885 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:25:14.981893 | orchestrator | 2025-03-11 01:25:14.981901 | orchestrator | 
RUNNING HANDLER [common : Restart cron container] ****************************** 2025-03-11 01:25:14.981909 | orchestrator | Tuesday 11 March 2025 01:25:03 +0000 (0:00:03.027) 0:02:32.930 ********* 2025-03-11 01:25:14.981917 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:14.981925 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:14.981933 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:25:14.981941 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:25:14.981950 | orchestrator | changed: [testbed-manager] 2025-03-11 01:25:14.981957 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:25:14.981966 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:14.981974 | orchestrator | 2025-03-11 01:25:14.981982 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:25:14.981990 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:14.981999 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:14.982012 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:18.028776 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:18.028886 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:18.028903 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:18.028916 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-11 01:25:18.028929 | orchestrator | 2025-03-11 01:25:18.028942 | orchestrator | 2025-03-11 01:25:18.028978 | orchestrator | TASKS RECAP 
******************************************************************** 2025-03-11 01:25:18.028993 | orchestrator | Tuesday 11 March 2025 01:25:13 +0000 (0:00:10.104) 0:02:43.034 ********* 2025-03-11 01:25:18.029005 | orchestrator | =============================================================================== 2025-03-11 01:25:18.029018 | orchestrator | common : Ensure fluentd image is present for label check --------------- 38.43s 2025-03-11 01:25:18.029031 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 22.47s 2025-03-11 01:25:18.029043 | orchestrator | common : Restart cron container ---------------------------------------- 10.10s 2025-03-11 01:25:18.029056 | orchestrator | common : Restart fluentd container ------------------------------------- 10.03s 2025-03-11 01:25:18.029087 | orchestrator | common : Copying over config.json files for services -------------------- 7.48s 2025-03-11 01:25:18.029101 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.78s 2025-03-11 01:25:18.029113 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.76s 2025-03-11 01:25:18.029125 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.56s 2025-03-11 01:25:18.029138 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 5.27s 2025-03-11 01:25:18.029170 | orchestrator | common : Check common containers ---------------------------------------- 4.73s 2025-03-11 01:25:18.029183 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.61s 2025-03-11 01:25:18.029196 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.58s 2025-03-11 01:25:18.029208 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.32s 2025-03-11 01:25:18.029220 | orchestrator | service-cert-copy : common | 
Copying over backend internal TLS certificate --- 3.24s 2025-03-11 01:25:18.029234 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.03s 2025-03-11 01:25:18.029246 | orchestrator | common : include_tasks -------------------------------------------------- 2.81s 2025-03-11 01:25:18.029259 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.58s 2025-03-11 01:25:18.029271 | orchestrator | common : Set fluentd facts ---------------------------------------------- 2.57s 2025-03-11 01:25:18.029283 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.38s 2025-03-11 01:25:18.029296 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.34s 2025-03-11 01:25:18.029309 | orchestrator | 2025-03-11 01:25:14 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:18.029325 | orchestrator | 2025-03-11 01:25:14 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:18.029339 | orchestrator | 2025-03-11 01:25:14 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:18.029374 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:18.029503 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:18.030682 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:18.032169 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:18.033891 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:18.035258 | orchestrator | 2025-03-11 01:25:18 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state 
STARTED 2025-03-11 01:25:18.037075 | orchestrator | 2025-03-11 01:25:18 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:21.094182 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:21.097435 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:21.097649 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:21.100020 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:21.102379 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:21.107645 | orchestrator | 2025-03-11 01:25:21 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:24.178680 | orchestrator | 2025-03-11 01:25:21 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:24.178812 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:24.179117 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:24.180954 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:24.184151 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:24.184438 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:24.184469 | orchestrator | 2025-03-11 01:25:24 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:24.184687 | orchestrator | 2025-03-11 01:25:24 | INFO  | Wait 1 second(s) until the next check 2025-03-11 
01:25:27.260325 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:27.261838 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:27.264267 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:27.269054 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:27.270004 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:27.272067 | orchestrator | 2025-03-11 01:25:27 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:27.272357 | orchestrator | 2025-03-11 01:25:27 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:30.317220 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:30.319906 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:30.319972 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:30.319997 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:30.321520 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:30.323868 | orchestrator | 2025-03-11 01:25:30 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:33.397956 | orchestrator | 2025-03-11 01:25:30 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:33.398143 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 
01:25:33.398424 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:33.398457 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:33.399202 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:33.399918 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:33.400707 | orchestrator | 2025-03-11 01:25:33 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:33.400827 | orchestrator | 2025-03-11 01:25:33 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:36.457655 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:36.458718 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:36.458756 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:36.458810 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:36.460405 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:36.465104 | orchestrator | 2025-03-11 01:25:36 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:39.531874 | orchestrator | 2025-03-11 01:25:36 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:39.531988 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:39.534386 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 
01:25:39.536931 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:39.540071 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:39.542960 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:39.544816 | orchestrator | 2025-03-11 01:25:39 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:39.545015 | orchestrator | 2025-03-11 01:25:39 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:42.687357 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:42.692745 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:42.692796 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:42.693140 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state STARTED 2025-03-11 01:25:42.694649 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:42.695210 | orchestrator | 2025-03-11 01:25:42 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:45.750420 | orchestrator | 2025-03-11 01:25:42 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:45.750565 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:25:45.752302 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:45.752864 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 
01:25:45.754680 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:45.756626 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task 47b80882-b5e0-43f8-94ff-63a4261298e6 is in state SUCCESS 2025-03-11 01:25:45.757628 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:45.758693 | orchestrator | 2025-03-11 01:25:45 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:48.798665 | orchestrator | 2025-03-11 01:25:45 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:48.798826 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:25:48.798916 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:48.798963 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:48.798979 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:48.798994 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:48.799013 | orchestrator | 2025-03-11 01:25:48 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:51.841834 | orchestrator | 2025-03-11 01:25:48 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:51.841959 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:25:51.845978 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:51.852652 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 
01:25:51.857082 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:51.861569 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:51.864348 | orchestrator | 2025-03-11 01:25:51 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state STARTED 2025-03-11 01:25:54.918487 | orchestrator | 2025-03-11 01:25:51 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:54.918681 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:25:54.918769 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:54.921153 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:54.921673 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:54.923220 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:25:54.924190 | orchestrator | 2025-03-11 01:25:54 | INFO  | Task 2b266e43-94c6-4793-b551-51b5c3a9ea8a is in state SUCCESS 2025-03-11 01:25:54.926476 | orchestrator | 2025-03-11 01:25:54.926523 | orchestrator | 2025-03-11 01:25:54.926582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:25:54.926598 | orchestrator | 2025-03-11 01:25:54.926612 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:25:54.926626 | orchestrator | Tuesday 11 March 2025 01:25:19 +0000 (0:00:00.477) 0:00:00.477 ********* 2025-03-11 01:25:54.926641 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:54.926656 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:25:54.926670 | orchestrator | 
ok: [testbed-node-2] 2025-03-11 01:25:54.926684 | orchestrator | 2025-03-11 01:25:54.926699 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:25:54.926713 | orchestrator | Tuesday 11 March 2025 01:25:20 +0000 (0:00:01.196) 0:00:01.674 ********* 2025-03-11 01:25:54.926728 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-03-11 01:25:54.926742 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-03-11 01:25:54.926763 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-03-11 01:25:54.926777 | orchestrator | 2025-03-11 01:25:54.926792 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-03-11 01:25:54.926806 | orchestrator | 2025-03-11 01:25:54.926820 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-03-11 01:25:54.926854 | orchestrator | Tuesday 11 March 2025 01:25:21 +0000 (0:00:01.061) 0:00:02.735 ********* 2025-03-11 01:25:54.926869 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:25:54.926884 | orchestrator | 2025-03-11 01:25:54.926899 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-03-11 01:25:54.926913 | orchestrator | Tuesday 11 March 2025 01:25:23 +0000 (0:00:01.802) 0:00:04.538 ********* 2025-03-11 01:25:54.926927 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-11 01:25:54.926942 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-11 01:25:54.926956 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-11 01:25:54.926970 | orchestrator | 2025-03-11 01:25:54.926984 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-03-11 01:25:54.926998 | orchestrator | Tuesday 11 March 2025 01:25:25 
+0000 (0:00:01.657) 0:00:06.195 ********* 2025-03-11 01:25:54.927013 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-11 01:25:54.927027 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-11 01:25:54.927041 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-11 01:25:54.927055 | orchestrator | 2025-03-11 01:25:54.927070 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-03-11 01:25:54.927084 | orchestrator | Tuesday 11 March 2025 01:25:28 +0000 (0:00:03.612) 0:00:09.808 ********* 2025-03-11 01:25:54.927098 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:54.927112 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:54.927127 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:54.927141 | orchestrator | 2025-03-11 01:25:54.927155 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-03-11 01:25:54.927169 | orchestrator | Tuesday 11 March 2025 01:25:33 +0000 (0:00:04.201) 0:00:14.009 ********* 2025-03-11 01:25:54.927183 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:54.927198 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:54.927212 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:54.927226 | orchestrator | 2025-03-11 01:25:54.927244 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:25:54.927259 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:54.927275 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:54.927290 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:54.927304 | orchestrator | 2025-03-11 01:25:54.927318 | orchestrator | 2025-03-11 01:25:54.927332 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-03-11 01:25:54.927346 | orchestrator | Tuesday 11 March 2025 01:25:40 +0000 (0:00:07.772) 0:00:21.782 ********* 2025-03-11 01:25:54.927360 | orchestrator | =============================================================================== 2025-03-11 01:25:54.927374 | orchestrator | memcached : Restart memcached container --------------------------------- 7.77s 2025-03-11 01:25:54.927389 | orchestrator | memcached : Check memcached container ----------------------------------- 4.20s 2025-03-11 01:25:54.927403 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.61s 2025-03-11 01:25:54.927417 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.80s 2025-03-11 01:25:54.927431 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.66s 2025-03-11 01:25:54.927445 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s 2025-03-11 01:25:54.927459 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2025-03-11 01:25:54.927473 | orchestrator | 2025-03-11 01:25:54.927487 | orchestrator | 2025-03-11 01:25:54.927501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:25:54.927521 | orchestrator | 2025-03-11 01:25:54.927557 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:25:54.927572 | orchestrator | Tuesday 11 March 2025 01:25:20 +0000 (0:00:01.079) 0:00:01.079 ********* 2025-03-11 01:25:54.927586 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:25:54.927601 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:25:54.927615 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:25:54.927629 | orchestrator | 2025-03-11 01:25:54.927643 | orchestrator | TASK [Group hosts based on 
enabled services] *********************************** 2025-03-11 01:25:54.927667 | orchestrator | Tuesday 11 March 2025 01:25:22 +0000 (0:00:01.364) 0:00:02.443 ********* 2025-03-11 01:25:54.927682 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-03-11 01:25:54.927697 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-03-11 01:25:54.927711 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-03-11 01:25:54.927725 | orchestrator | 2025-03-11 01:25:54.927739 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-03-11 01:25:54.927753 | orchestrator | 2025-03-11 01:25:54.927767 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-03-11 01:25:54.927781 | orchestrator | Tuesday 11 March 2025 01:25:22 +0000 (0:00:00.741) 0:00:03.184 ********* 2025-03-11 01:25:54.927795 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:25:54.927809 | orchestrator | 2025-03-11 01:25:54.927823 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-03-11 01:25:54.927838 | orchestrator | Tuesday 11 March 2025 01:25:24 +0000 (0:00:01.674) 0:00:04.859 ********* 2025-03-11 01:25:54.927854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927874 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.927971 | orchestrator | 2025-03-11 01:25:54.927985 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-03-11 01:25:54.927999 | orchestrator | Tuesday 11 March 2025 01:25:27 +0000 (0:00:02.847) 0:00:07.707 ********* 2025-03-11 01:25:54.928014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928119 | orchestrator | 2025-03-11 01:25:54.928133 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-03-11 01:25:54.928147 | orchestrator | Tuesday 11 March 2025 01:25:30 +0000 (0:00:03.457) 0:00:11.164 
********* 2025-03-11 01:25:54.928162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928264 | orchestrator | 2025-03-11 01:25:54.928278 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-11 01:25:54.928292 | orchestrator | Tuesday 11 March 2025 01:25:36 +0000 (0:00:05.623) 0:00:16.787 ********* 2025-03-11 01:25:54.928307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:54.928392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:25:57.969152 | orchestrator | 2025-03-11 01:25:57.969261 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:25:57.969281 | orchestrator | Tuesday 11 March 2025 01:25:39 +0000 (0:00:02.994) 0:00:19.783 ********* 2025-03-11 01:25:57.969296 | orchestrator | 2025-03-11 01:25:57.969311 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:25:57.969326 | orchestrator | Tuesday 11 March 2025 01:25:39 +0000 (0:00:00.338) 0:00:20.121 ********* 2025-03-11 01:25:57.969340 | orchestrator | 2025-03-11 01:25:57.969354 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:25:57.969368 | orchestrator | Tuesday 11 March 2025 01:25:39 +0000 (0:00:00.075) 0:00:20.196 ********* 2025-03-11 01:25:57.969383 | orchestrator | 2025-03-11 01:25:57.969405 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-11 01:25:57.969419 | orchestrator | Tuesday 11 March 2025 01:25:40 +0000 (0:00:00.400) 0:00:20.597 ********* 2025-03-11 01:25:57.969441 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:57.969457 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:57.969471 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:57.969485 | orchestrator | 2025-03-11 01:25:57.969499 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-11 01:25:57.969513 | orchestrator | Tuesday 11 
March 2025 01:25:46 +0000 (0:00:06.384) 0:00:26.981 ********* 2025-03-11 01:25:57.969595 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:25:57.969614 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:25:57.969628 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:25:57.969642 | orchestrator | 2025-03-11 01:25:57.969656 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:25:57.969670 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:57.969743 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:57.969761 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:25:57.969778 | orchestrator | 2025-03-11 01:25:57.969793 | orchestrator | 2025-03-11 01:25:57.969808 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:25:57.969824 | orchestrator | Tuesday 11 March 2025 01:25:52 +0000 (0:00:06.247) 0:00:33.229 ********* 2025-03-11 01:25:57.969839 | orchestrator | =============================================================================== 2025-03-11 01:25:57.969854 | orchestrator | redis : Restart redis container ----------------------------------------- 6.38s 2025-03-11 01:25:57.969869 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.25s 2025-03-11 01:25:57.969884 | orchestrator | redis : Copying over redis config files --------------------------------- 5.62s 2025-03-11 01:25:57.969900 | orchestrator | redis : Copying over default config.json files -------------------------- 3.46s 2025-03-11 01:25:57.969915 | orchestrator | redis : Check redis containers ------------------------------------------ 3.00s 2025-03-11 01:25:57.969931 | orchestrator | redis : Ensuring config directories exist 
------------------------------- 2.85s 2025-03-11 01:25:57.969946 | orchestrator | redis : include_tasks --------------------------------------------------- 1.67s 2025-03-11 01:25:57.969962 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.37s 2025-03-11 01:25:57.969977 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.81s 2025-03-11 01:25:57.969993 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2025-03-11 01:25:57.970008 | orchestrator | 2025-03-11 01:25:54 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:25:57.970087 | orchestrator | 2025-03-11 01:25:57 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:25:57.970363 | orchestrator | 2025-03-11 01:25:57 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:25:57.970390 | orchestrator | 2025-03-11 01:25:57 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:25:57.970411 | orchestrator | 2025-03-11 01:25:57 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:25:57.971074 | orchestrator | 2025-03-11 01:25:57 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:01.040703 | orchestrator | 2025-03-11 01:25:57 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:01.040816 | orchestrator | 2025-03-11 01:26:01 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:01.042845 | orchestrator | 2025-03-11 01:26:01 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:01.042897 | orchestrator | 2025-03-11 01:26:01 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:01.050171 | orchestrator | 2025-03-11 01:26:01 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 
01:26:01.054871 | orchestrator | 2025-03-11 01:26:01 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:04.087987 | orchestrator | 2025-03-11 01:26:01 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:04.088109 | orchestrator | 2025-03-11 01:26:04 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:04.091435 | orchestrator | 2025-03-11 01:26:04 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:04.093857 | orchestrator | 2025-03-11 01:26:04 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:04.093915 | orchestrator | 2025-03-11 01:26:04 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:04.095707 | orchestrator | 2025-03-11 01:26:04 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:04.095811 | orchestrator | 2025-03-11 01:26:04 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:07.135486 | orchestrator | 2025-03-11 01:26:07 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:07.146459 | orchestrator | 2025-03-11 01:26:07 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:07.146639 | orchestrator | 2025-03-11 01:26:07 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:07.147544 | orchestrator | 2025-03-11 01:26:07 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:07.148707 | orchestrator | 2025-03-11 01:26:07 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:10.193870 | orchestrator | 2025-03-11 01:26:07 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:10.193962 | orchestrator | 2025-03-11 01:26:10 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:10.194913 | orchestrator 
| 2025-03-11 01:26:10 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:10.196782 | orchestrator | 2025-03-11 01:26:10 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:10.198908 | orchestrator | 2025-03-11 01:26:10 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:10.203071 | orchestrator | 2025-03-11 01:26:10 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:13.243659 | orchestrator | 2025-03-11 01:26:10 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:13.243788 | orchestrator | 2025-03-11 01:26:13 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:13.244624 | orchestrator | 2025-03-11 01:26:13 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:13.244655 | orchestrator | 2025-03-11 01:26:13 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:13.248338 | orchestrator | 2025-03-11 01:26:13 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:13.250989 | orchestrator | 2025-03-11 01:26:13 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:13.251500 | orchestrator | 2025-03-11 01:26:13 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:16.320769 | orchestrator | 2025-03-11 01:26:16 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:16.324194 | orchestrator | 2025-03-11 01:26:16 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:16.330536 | orchestrator | 2025-03-11 01:26:16 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:16.334949 | orchestrator | 2025-03-11 01:26:16 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:16.342372 | orchestrator | 
2025-03-11 01:26:16 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:19.424174 | orchestrator | 2025-03-11 01:26:16 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:19.424298 | orchestrator | 2025-03-11 01:26:19 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:19.426144 | orchestrator | 2025-03-11 01:26:19 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:19.428311 | orchestrator | 2025-03-11 01:26:19 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:19.428349 | orchestrator | 2025-03-11 01:26:19 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:19.428374 | orchestrator | 2025-03-11 01:26:19 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:22.497827 | orchestrator | 2025-03-11 01:26:19 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:22.497943 | orchestrator | 2025-03-11 01:26:22 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:25.579640 | orchestrator | 2025-03-11 01:26:22 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:25.579767 | orchestrator | 2025-03-11 01:26:22 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:25.579786 | orchestrator | 2025-03-11 01:26:22 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:25.579801 | orchestrator | 2025-03-11 01:26:22 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:25.579816 | orchestrator | 2025-03-11 01:26:22 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:25.579846 | orchestrator | 2025-03-11 01:26:25 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:25.589102 | orchestrator | 2025-03-11 01:26:25 | INFO  | 
Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:25.593137 | orchestrator | 2025-03-11 01:26:25 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:25.595927 | orchestrator | 2025-03-11 01:26:25 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:25.601034 | orchestrator | 2025-03-11 01:26:25 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:25.603116 | orchestrator | 2025-03-11 01:26:25 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:28.731900 | orchestrator | 2025-03-11 01:26:28 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:28.734371 | orchestrator | 2025-03-11 01:26:28 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:28.734425 | orchestrator | 2025-03-11 01:26:28 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:28.737029 | orchestrator | 2025-03-11 01:26:28 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:28.737077 | orchestrator | 2025-03-11 01:26:28 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:31.794183 | orchestrator | 2025-03-11 01:26:28 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:31.794326 | orchestrator | 2025-03-11 01:26:31 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:31.805768 | orchestrator | 2025-03-11 01:26:31 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:31.808618 | orchestrator | 2025-03-11 01:26:31 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:31.813486 | orchestrator | 2025-03-11 01:26:31 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:31.813556 | orchestrator | 2025-03-11 01:26:31 | INFO  | Task 
32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:34.892053 | orchestrator | 2025-03-11 01:26:31 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:34.892156 | orchestrator | 2025-03-11 01:26:34 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:34.895200 | orchestrator | 2025-03-11 01:26:34 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:34.896722 | orchestrator | 2025-03-11 01:26:34 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:34.898963 | orchestrator | 2025-03-11 01:26:34 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:34.901077 | orchestrator | 2025-03-11 01:26:34 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:34.901577 | orchestrator | 2025-03-11 01:26:34 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:37.973474 | orchestrator | 2025-03-11 01:26:37 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:37.973896 | orchestrator | 2025-03-11 01:26:37 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:37.974919 | orchestrator | 2025-03-11 01:26:37 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:37.975866 | orchestrator | 2025-03-11 01:26:37 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:37.976778 | orchestrator | 2025-03-11 01:26:37 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:41.064320 | orchestrator | 2025-03-11 01:26:37 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:41.064449 | orchestrator | 2025-03-11 01:26:41 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:41.068586 | orchestrator | 2025-03-11 01:26:41 | INFO  | Task 
e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:41.068636 | orchestrator | 2025-03-11 01:26:41 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:41.074765 | orchestrator | 2025-03-11 01:26:41 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:41.079702 | orchestrator | 2025-03-11 01:26:41 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:44.149718 | orchestrator | 2025-03-11 01:26:41 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:44.149851 | orchestrator | 2025-03-11 01:26:44 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:44.150142 | orchestrator | 2025-03-11 01:26:44 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:44.151438 | orchestrator | 2025-03-11 01:26:44 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:44.153451 | orchestrator | 2025-03-11 01:26:44 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:44.154882 | orchestrator | 2025-03-11 01:26:44 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:47.203592 | orchestrator | 2025-03-11 01:26:44 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:47.203711 | orchestrator | 2025-03-11 01:26:47 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:47.208377 | orchestrator | 2025-03-11 01:26:47 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:47.208427 | orchestrator | 2025-03-11 01:26:47 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:47.212042 | orchestrator | 2025-03-11 01:26:47 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:47.215160 | orchestrator | 2025-03-11 01:26:47 | INFO  | Task 
32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:50.302788 | orchestrator | 2025-03-11 01:26:47 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:50.302921 | orchestrator | 2025-03-11 01:26:50 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:50.303288 | orchestrator | 2025-03-11 01:26:50 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:50.304453 | orchestrator | 2025-03-11 01:26:50 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:50.305643 | orchestrator | 2025-03-11 01:26:50 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:50.307955 | orchestrator | 2025-03-11 01:26:50 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:53.355749 | orchestrator | 2025-03-11 01:26:50 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:53.355882 | orchestrator | 2025-03-11 01:26:53 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:53.358263 | orchestrator | 2025-03-11 01:26:53 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:53.364419 | orchestrator | 2025-03-11 01:26:53 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:53.368408 | orchestrator | 2025-03-11 01:26:53 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:53.369996 | orchestrator | 2025-03-11 01:26:53 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:53.372984 | orchestrator | 2025-03-11 01:26:53 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:56.444543 | orchestrator | 2025-03-11 01:26:56 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:56.447432 | orchestrator | 2025-03-11 01:26:56 | INFO  | Task 
e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:56.450590 | orchestrator | 2025-03-11 01:26:56 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:56.454274 | orchestrator | 2025-03-11 01:26:56 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:56.459623 | orchestrator | 2025-03-11 01:26:56 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:26:59.530663 | orchestrator | 2025-03-11 01:26:56 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:26:59.530790 | orchestrator | 2025-03-11 01:26:59 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:26:59.534180 | orchestrator | 2025-03-11 01:26:59 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:26:59.534990 | orchestrator | 2025-03-11 01:26:59 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:26:59.536105 | orchestrator | 2025-03-11 01:26:59 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:26:59.538264 | orchestrator | 2025-03-11 01:26:59 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:27:02.617310 | orchestrator | 2025-03-11 01:26:59 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:02.617445 | orchestrator | 2025-03-11 01:27:02 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:02.619835 | orchestrator | 2025-03-11 01:27:02 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:02.621630 | orchestrator | 2025-03-11 01:27:02 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:27:02.624277 | orchestrator | 2025-03-11 01:27:02 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:27:02.624312 | orchestrator | 2025-03-11 01:27:02 | INFO  | Task 
32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:27:02.624727 | orchestrator | 2025-03-11 01:27:02 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:05.686522 | orchestrator | 2025-03-11 01:27:05 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:05.688223 | orchestrator | 2025-03-11 01:27:05 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:05.690630 | orchestrator | 2025-03-11 01:27:05 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:27:05.690668 | orchestrator | 2025-03-11 01:27:05 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state STARTED 2025-03-11 01:27:05.692744 | orchestrator | 2025-03-11 01:27:05 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:27:08.750946 | orchestrator | 2025-03-11 01:27:05 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:08.751081 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:08.751545 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:08.751580 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:27:08.752411 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task 5769c09e-ef7f-413f-9683-fa8033b3975b is in state SUCCESS 2025-03-11 01:27:08.754906 | orchestrator | 2025-03-11 01:27:08.754952 | orchestrator | 2025-03-11 01:27:08.754967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:27:08.754991 | orchestrator | 2025-03-11 01:27:08.755006 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:27:08.755020 | orchestrator | Tuesday 11 March 2025 01:25:19 +0000 (0:00:00.634) 0:00:00.635 
********* 2025-03-11 01:27:08.755035 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:27:08.755055 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:27:08.755069 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:27:08.755083 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:27:08.755097 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:27:08.755111 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:27:08.755125 | orchestrator | 2025-03-11 01:27:08.755139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:27:08.755153 | orchestrator | Tuesday 11 March 2025 01:25:21 +0000 (0:00:01.835) 0:00:02.470 ********* 2025-03-11 01:27:08.755167 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755182 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755196 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755210 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755224 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755238 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:27:08.755252 | orchestrator | 2025-03-11 01:27:08.755266 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-03-11 01:27:08.755280 | orchestrator | 2025-03-11 01:27:08.755299 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-03-11 01:27:08.755333 | orchestrator | Tuesday 11 March 2025 01:25:24 +0000 (0:00:02.490) 0:00:04.961 ********* 2025-03-11 01:27:08.755349 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:27:08.755365 | orchestrator | 2025-03-11 01:27:08.755379 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-11 01:27:08.755393 | orchestrator | Tuesday 11 March 2025 01:25:28 +0000 (0:00:03.813) 0:00:08.775 ********* 2025-03-11 01:27:08.755407 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-11 01:27:08.755422 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-11 01:27:08.755436 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-11 01:27:08.755449 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-11 01:27:08.755463 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-11 01:27:08.755478 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-11 01:27:08.755519 | orchestrator | 2025-03-11 01:27:08.755535 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-11 01:27:08.755551 | orchestrator | Tuesday 11 March 2025 01:25:30 +0000 (0:00:02.013) 0:00:10.788 ********* 2025-03-11 01:27:08.755567 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-11 01:27:08.755583 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-11 01:27:08.755598 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-11 01:27:08.755615 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-11 01:27:08.755631 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-11 01:27:08.755646 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-11 01:27:08.755662 | orchestrator | 2025-03-11 01:27:08.755678 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-11 01:27:08.755700 | orchestrator | Tuesday 11 March 2025 01:25:33 
+0000 (0:00:03.749) 0:00:14.538 ********* 2025-03-11 01:27:08.755716 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-03-11 01:27:08.755732 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:27:08.755749 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-03-11 01:27:08.755764 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:27:08.755780 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-03-11 01:27:08.755796 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:27:08.755812 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-03-11 01:27:08.755829 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:27:08.755844 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-03-11 01:27:08.755858 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:27:08.755872 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-03-11 01:27:08.755886 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:27:08.755901 | orchestrator | 2025-03-11 01:27:08.755915 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-03-11 01:27:08.755929 | orchestrator | Tuesday 11 March 2025 01:25:36 +0000 (0:00:03.077) 0:00:17.615 ********* 2025-03-11 01:27:08.755943 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:27:08.755958 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:27:08.755972 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:27:08.755986 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:27:08.756000 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:27:08.756014 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:27:08.756028 | orchestrator | 2025-03-11 01:27:08.756042 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-03-11 01:27:08.756057 | orchestrator | Tuesday 11 March 2025 01:25:37 +0000 
(0:00:01.059) 0:00:18.675 ********* 2025-03-11 01:27:08.756084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756306 | orchestrator | 2025-03-11 01:27:08.756320 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-11 01:27:08.756334 | orchestrator | Tuesday 11 March 2025 01:25:43 +0000 (0:00:05.223) 0:00:23.898 ********* 2025-03-11 01:27:08.756349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2025-03-11 01:27:08.756473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.756624 | orchestrator | 2025-03-11 01:27:08.756638 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-03-11 01:27:08.756853 | orchestrator | Tuesday 11 March 2025 01:25:49 +0000 (0:00:06.024) 0:00:29.922 ********* 2025-03-11 01:27:08.756934 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:27:08.756953 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:27:08.756968 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:27:08.756982 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:27:08.756996 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:27:08.757010 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:27:08.757026 | orchestrator | 2025-03-11 01:27:08.757041 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-03-11 01:27:08.757056 | orchestrator | Tuesday 11 March 2025 01:25:55 +0000 (0:00:05.870) 0:00:35.793 ********* 2025-03-11 01:27:08.757070 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:27:08.757084 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:27:08.757097 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:27:08.757111 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:27:08.757125 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:27:08.757139 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:27:08.757152 | orchestrator | 2025-03-11 01:27:08.757167 | 
orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-03-11 01:27:08.757181 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:03.257) 0:00:39.050 ********* 2025-03-11 01:27:08.757195 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:27:08.757208 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:27:08.757222 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:27:08.757236 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:27:08.757250 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:27:08.757263 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:27:08.757277 | orchestrator | 2025-03-11 01:27:08.757291 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-11 01:27:08.757305 | orchestrator | Tuesday 11 March 2025 01:26:01 +0000 (0:00:02.876) 0:00:41.926 ********* 2025-03-11 01:27:08.757321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 
01:27:08.757663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:27:08.757808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:27:08.757830 | orchestrator |
2025-03-11 01:27:08.757844 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.757859 | orchestrator | Tuesday 11 March 2025 01:26:05 +0000 (0:00:03.929) 0:00:45.856 *********
2025-03-11 01:27:08.757873 | orchestrator |
2025-03-11 01:27:08.757888 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.757902 | orchestrator | Tuesday 11 March 2025 01:26:05 +0000 (0:00:00.291) 0:00:46.147 *********
2025-03-11 01:27:08.757916 | orchestrator |
2025-03-11 01:27:08.757930 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.757944 | orchestrator | Tuesday 11 March 2025 01:26:05 +0000 (0:00:00.352) 0:00:46.500 *********
2025-03-11 01:27:08.757958 | orchestrator |
2025-03-11 01:27:08.757972 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.757986 | orchestrator | Tuesday 11 March 2025 01:26:05 +0000 (0:00:00.201) 0:00:46.701 *********
2025-03-11 01:27:08.758000 | orchestrator |
2025-03-11 01:27:08.758014 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.758073 | orchestrator | Tuesday 11 March 2025 01:26:06 +0000 (0:00:00.425) 0:00:47.127 *********
2025-03-11 01:27:08.758087 | orchestrator |
2025-03-11 01:27:08.758107 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:27:08.758121 | orchestrator | Tuesday 11 March 2025 01:26:06 +0000 (0:00:00.212) 0:00:47.340 *********
2025-03-11 01:27:08.758135 | orchestrator |
2025-03-11 01:27:08.758149 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-03-11 01:27:08.758163 | orchestrator | Tuesday 11 March 2025 01:26:07 +0000 (0:00:00.673) 0:00:48.013 *********
2025-03-11 01:27:08.758178 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:27:08.758192 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:27:08.758206 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:27:08.758220 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:27:08.758234 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:27:08.758248 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:27:08.758262 | orchestrator |
2025-03-11 01:27:08.758276 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-03-11 01:27:08.758291 | orchestrator | Tuesday 11 March 2025 01:26:17 +0000 (0:00:10.426) 0:00:58.440 *********
2025-03-11 01:27:08.758312 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:27:08.758328 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:27:08.758342 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:27:08.758356 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:27:08.758375 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:27:08.758397 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:27:08.758420 | orchestrator |
2025-03-11 01:27:08.758445 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-11 01:27:08.758557 | orchestrator | Tuesday 11 March 2025 01:26:21 +0000 (0:00:03.802) 0:01:02.243 *********
2025-03-11 01:27:08.758579 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:27:08.758593 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:27:08.758607 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:27:08.758627 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:27:08.758653 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:27:08.758676 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:27:08.758700 | orchestrator |
2025-03-11 01:27:08.758722 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-03-11 01:27:08.758763 | orchestrator | Tuesday 11 March 2025 01:26:32 +0000 (0:00:10.716) 0:01:12.960 *********
2025-03-11 01:27:08.758789 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-03-11 01:27:08.758813 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-03-11 01:27:08.758832 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-03-11 01:27:08.758847 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-03-11 01:27:08.758862 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-03-11 01:27:08.758883 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-03-11 01:27:08.758897 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-03-11 01:27:08.758911 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-03-11 01:27:08.758925 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-03-11 01:27:08.758939 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-03-11 01:27:08.758953 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-03-11 01:27:08.758968 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-03-11 01:27:08.758982 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.758996 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.759010 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.759024 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.759038 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.759052 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:27:08.759066 | orchestrator |
2025-03-11 01:27:08.759080 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-03-11 01:27:08.759094 | orchestrator | Tuesday 11 March 2025 01:26:43 +0000 (0:00:11.659) 0:01:24.620 *********
2025-03-11 01:27:08.759108 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-03-11 01:27:08.759123 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:27:08.759138 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-03-11 01:27:08.759152 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:27:08.759166 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-03-11 01:27:08.759180 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:27:08.759194 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-03-11 01:27:08.759208 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-03-11 01:27:08.759222 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-03-11 01:27:08.759235 | orchestrator |
2025-03-11 01:27:08.759249 |
orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-03-11 01:27:08.759268 | orchestrator | Tuesday 11 March 2025 01:26:47 +0000 (0:00:03.543) 0:01:28.163 *********
2025-03-11 01:27:08.759283 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:08.759304 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:27:08.759319 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:08.759333 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:27:08.759347 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:08.759361 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:27:08.759376 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:08.759400 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:11.809888 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:27:11.810012 | orchestrator |
2025-03-11 01:27:11.810095 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-11 01:27:11.810113 | orchestrator | Tuesday 11 March 2025 01:26:53 +0000 (0:00:06.324) 0:01:34.487 *********
2025-03-11 01:27:11.810128 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:27:11.810144 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:27:11.810158 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:27:11.810173 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:27:11.810187 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:27:11.810202 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:27:11.810216 | orchestrator |
2025-03-11 01:27:11.810231 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:27:11.810246 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:27:11.810263 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:27:11.810278 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:27:11.810293 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:27:11.810308 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:27:11.810342 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:27:11.810358 | orchestrator |
2025-03-11 01:27:11.810373 | orchestrator |
2025-03-11 01:27:11.810388 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:27:11.810403 | orchestrator | Tuesday 11 March 2025 01:27:06 +0000 (0:00:12.422) 0:01:46.910 *********
2025-03-11 01:27:11.810418 | orchestrator | ===============================================================================
2025-03-11 01:27:11.810433 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 23.14s
2025-03-11 01:27:11.810448 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 11.66s
2025-03-11 01:27:11.810462 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.43s
2025-03-11 01:27:11.810478 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 6.32s
2025-03-11 01:27:11.810516 | orchestrator | openvswitch : Copying over config.json files for services --------------- 6.02s
2025-03-11 01:27:11.810531 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 5.87s
2025-03-11 01:27:11.810545 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 5.22s
2025-03-11 01:27:11.810559 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.93s
2025-03-11 01:27:11.810573 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.82s
2025-03-11 01:27:11.810586 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.80s
2025-03-11 01:27:11.810626 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.75s
2025-03-11 01:27:11.810640 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.54s
2025-03-11 01:27:11.810659 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.26s
2025-03-11 01:27:11.810675 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.08s
2025-03-11 01:27:11.810689 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.88s
2025-03-11 01:27:11.810703 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.49s
2025-03-11 01:27:11.810717 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.16s
2025-03-11 01:27:11.810731 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s
2025-03-11 01:27:11.810745 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.84s
2025-03-11 01:27:11.810759 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.06s
2025-03-11 01:27:11.810773 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED
2025-03-11 01:27:11.810787 | orchestrator | 2025-03-11 01:27:08 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED
2025-03-11 01:27:11.810802 | orchestrator | 2025-03-11
01:27:08 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:11.810837 | orchestrator | 2025-03-11 01:27:11 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:11.812362 | orchestrator | 2025-03-11 01:27:11 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:11.816191 | orchestrator | 2025-03-11 01:27:11 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:27:11.820809 | orchestrator | 2025-03-11 01:27:11 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:27:11.822747 | orchestrator | 2025-03-11 01:27:11 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:27:14.864076 | orchestrator | 2025-03-11 01:27:11 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:14.864187 | orchestrator | 2025-03-11 01:27:14 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:14.864482 | orchestrator | 2025-03-11 01:27:14 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:14.864527 | orchestrator | 2025-03-11 01:27:14 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:27:14.865994 | orchestrator | 2025-03-11 01:27:14 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:27:14.866823 | orchestrator | 2025-03-11 01:27:14 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:27:14.867034 | orchestrator | 2025-03-11 01:27:14 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:27:17.900863 | orchestrator | 2025-03-11 01:27:17 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:27:17.903484 | orchestrator | 2025-03-11 01:27:17 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:27:17.905558 | orchestrator | 2025-03-11 01:27:17 | INFO  | Task 
c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:19.292996 | orchestrator | 2025-03-11 01:28:19 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:19.293018 | orchestrator | 2025-03-11 01:28:19 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state STARTED 2025-03-11 01:28:22.358851 | orchestrator | 2025-03-11 01:28:19 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:22.358995 | orchestrator | 2025-03-11 01:28:22 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:22.361564 | orchestrator | 2025-03-11 01:28:22 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:22.367358 | orchestrator | 2025-03-11 01:28:22 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:22.367394 | orchestrator | 2025-03-11 01:28:22 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:22.369041 | orchestrator | 2025-03-11 01:28:22 | INFO  | Task 32a54bb5-c30f-4ada-b5fd-91ba5c2ca64c is in state SUCCESS 2025-03-11 01:28:22.369087 | orchestrator | 2025-03-11 01:28:22.369104 | orchestrator | 2025-03-11 01:28:22.369118 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-03-11 01:28:22.369133 | orchestrator | 2025-03-11 01:28:22.369147 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-03-11 01:28:22.369161 | orchestrator | Tuesday 11 March 2025 01:23:25 +0000 (0:00:00.342) 0:00:00.342 ********* 2025-03-11 01:28:22.369176 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:28:22.369192 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:28:22.369206 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:28:22.369219 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.369234 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.369247 | orchestrator | ok: 
[testbed-node-2] 2025-03-11 01:28:22.369261 | orchestrator | 2025-03-11 01:28:22.369275 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-03-11 01:28:22.369289 | orchestrator | Tuesday 11 March 2025 01:23:27 +0000 (0:00:01.619) 0:00:01.961 ********* 2025-03-11 01:28:22.369303 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.369318 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.369332 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.369346 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.369360 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.369374 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.369388 | orchestrator | 2025-03-11 01:28:22.369402 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-03-11 01:28:22.369416 | orchestrator | Tuesday 11 March 2025 01:23:29 +0000 (0:00:01.909) 0:00:03.871 ********* 2025-03-11 01:28:22.369430 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.369445 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.369459 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.369473 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.369487 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.369570 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.369585 | orchestrator | 2025-03-11 01:28:22.369600 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-03-11 01:28:22.369614 | orchestrator | Tuesday 11 March 2025 01:23:31 +0000 (0:00:02.558) 0:00:06.430 ********* 2025-03-11 01:28:22.369651 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:28:22.369667 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:28:22.369683 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:28:22.369699 | orchestrator | changed: 
[testbed-node-0] 2025-03-11 01:28:22.369714 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.369730 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.369745 | orchestrator | 2025-03-11 01:28:22.369762 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-03-11 01:28:22.369777 | orchestrator | Tuesday 11 March 2025 01:23:41 +0000 (0:00:09.585) 0:00:16.016 ********* 2025-03-11 01:28:22.369887 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:28:22.369903 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:28:22.369917 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:28:22.369931 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.369944 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.369958 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.369972 | orchestrator | 2025-03-11 01:28:22.369986 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-03-11 01:28:22.370000 | orchestrator | Tuesday 11 March 2025 01:23:43 +0000 (0:00:02.460) 0:00:18.476 ********* 2025-03-11 01:28:22.370068 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:28:22.370086 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:28:22.370100 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:28:22.370114 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.370128 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.370141 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.370155 | orchestrator | 2025-03-11 01:28:22.370169 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-03-11 01:28:22.370183 | orchestrator | Tuesday 11 March 2025 01:23:47 +0000 (0:00:03.237) 0:00:21.714 ********* 2025-03-11 01:28:22.370197 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.370217 | orchestrator | skipping: [testbed-node-4] 
2025-03-11 01:28:22.370231 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.370245 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.370259 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.370272 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.370286 | orchestrator | 2025-03-11 01:28:22.370300 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-03-11 01:28:22.370314 | orchestrator | Tuesday 11 March 2025 01:23:48 +0000 (0:00:01.163) 0:00:22.878 ********* 2025-03-11 01:28:22.370328 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.370342 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.370356 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.370369 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.370383 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.370396 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.370410 | orchestrator | 2025-03-11 01:28:22.370424 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-03-11 01:28:22.370438 | orchestrator | Tuesday 11 March 2025 01:23:51 +0000 (0:00:03.324) 0:00:26.202 ********* 2025-03-11 01:28:22.370457 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:28:22.370472 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:28:22.370486 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.370518 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:28:22.370533 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:28:22.370547 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.370561 | orchestrator | skipping: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-iptables)
2025-03-11 01:28:22.370580 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-03-11 01:28:22.370609 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.370624 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-03-11 01:28:22.370651 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-03-11 01:28:22.370666 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.370681 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-03-11 01:28:22.370695 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-03-11 01:28:22.370709 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.370723 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-03-11 01:28:22.370737 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-03-11 01:28:22.370751 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.370764 | orchestrator |
2025-03-11 01:28:22.370778 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-03-11 01:28:22.370792 | orchestrator | Tuesday 11 March 2025 01:23:53 +0000 (0:00:01.482) 0:00:27.684 *********
2025-03-11 01:28:22.370806 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.370820 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.370834 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.370848 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.370862 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.370876 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.370890 | orchestrator |
2025-03-11 01:28:22.370904 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-03-11 01:28:22.370919 | orchestrator | Tuesday 11 March 2025 01:23:56 +0000 (0:00:02.939) 0:00:30.624 *********
2025-03-11 01:28:22.370933 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:28:22.370947 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:28:22.370961 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:28:22.370976 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.370990 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.371004 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.371017 | orchestrator |
2025-03-11 01:28:22.371032 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-03-11 01:28:22.371046 | orchestrator | Tuesday 11 March 2025 01:23:57 +0000 (0:00:01.607) 0:00:32.232 *********
2025-03-11 01:28:22.371060 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:28:22.371074 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:28:22.371088 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:28:22.371102 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:22.371116 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:22.371130 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.371144 | orchestrator |
2025-03-11 01:28:22.371158 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-03-11 01:28:22.371172 | orchestrator | Tuesday 11 March 2025 01:24:03 +0000 (0:00:05.418) 0:00:37.650 *********
2025-03-11 01:28:22.371186 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.371200 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.371214 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.371228 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.371242 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.371255 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.371269 | orchestrator |
2025-03-11 01:28:22.371284 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-03-11 01:28:22.371298 | orchestrator | Tuesday 11 March 2025 01:24:04 +0000 (0:00:01.051) 0:00:38.702 *********
2025-03-11 01:28:22.371312 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.371325 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.371339 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.371353 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.371367 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.371388 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.371402 | orchestrator |
2025-03-11 01:28:22.371416 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-03-11 01:28:22.371431 | orchestrator | Tuesday 11 March 2025 01:24:05 +0000 (0:00:01.829) 0:00:40.532 *********
2025-03-11 01:28:22.371445 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.371459 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.371473 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.371487 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.371537 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.371552 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.371566 | orchestrator |
2025-03-11 01:28:22.371580 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-03-11 01:28:22.371593 | orchestrator | Tuesday 11 March 2025 01:24:06 +0000 (0:00:00.779) 0:00:41.311 *********
2025-03-11 01:28:22.371607 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-03-11 01:28:22.371622 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-03-11 01:28:22.371636 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.371649 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-03-11 01:28:22.371663 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-03-11 01:28:22.371677 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.371691 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-03-11 01:28:22.371705 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-03-11 01:28:22.371719 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.371733 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-03-11 01:28:22.371747 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-03-11 01:28:22.371761 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.371775 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-03-11 01:28:22.371789 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-03-11 01:28:22.371803 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.371817 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-03-11 01:28:22.371831 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-03-11 01:28:22.371845 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.371859 | orchestrator |
2025-03-11 01:28:22.371873 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-03-11 01:28:22.371894 | orchestrator | Tuesday 11 March 2025 01:24:07 +0000 (0:00:00.887) 0:00:42.199 *********
2025-03-11 01:28:22.371909 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:28:22.371923 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:28:22.371937 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:28:22.371950 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.371964 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.371978 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.371992 | orchestrator |
2025-03-11 01:28:22.372006 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-03-11 01:28:22.372020 | orchestrator |
2025-03-11 01:28:22.372034 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-03-11 01:28:22.372048 | orchestrator | Tuesday 11 March 2025 01:24:08 +0000 (0:00:01.296) 0:00:43.495 *********
2025-03-11 01:28:22.372062 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.372076 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.372089 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.372103 | orchestrator |
2025-03-11 01:28:22.372117 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-03-11 01:28:22.372131 | orchestrator | Tuesday 11 March 2025 01:24:09 +0000 (0:00:00.955) 0:00:44.450 *********
2025-03-11 01:28:22.372145 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.372158 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.372179 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.372193 | orchestrator |
2025-03-11 01:28:22.372207 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-03-11 01:28:22.372221 | orchestrator | Tuesday 11 March 2025 01:24:10 +0000 (0:00:01.126) 0:00:45.577 *********
2025-03-11 01:28:22.372235 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.372249 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.372263 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.372276 | orchestrator |
2025-03-11 01:28:22.372290 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-03-11 01:28:22.372305 | orchestrator | Tuesday 11 March 2025 01:24:12 +0000 (0:00:01.172) 0:00:46.750 *********
2025-03-11 01:28:22.372318 | orchestrator | ok: [testbed-node-0]
2025-03-11
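The k3s_custom_registries tasks above were all skipped in this run (no custom registry was configured). When enabled, they render /etc/rancher/k3s/registries.yaml on each node. A minimal sketch of what such a file can look like, assuming a hypothetical mirror endpoint that is not taken from this job:

```yaml
# Hypothetical /etc/rancher/k3s/registries.yaml -- the mirror endpoint below is
# an illustrative placeholder, not a value from this deployment.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      insecure_skip_verify: false
```

k3s reads this file at startup and rewrites its embedded containerd configuration accordingly, which is why the role creates the directory before the service is enabled.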
01:28:22.372332 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.372346 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.372360 | orchestrator |
2025-03-11 01:28:22.372374 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-03-11 01:28:22.372393 | orchestrator | Tuesday 11 March 2025 01:24:12 +0000 (0:00:00.752) 0:00:47.502 *********
2025-03-11 01:28:22.372408 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.372422 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.372436 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.372450 | orchestrator |
2025-03-11 01:28:22.372464 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-03-11 01:28:22.372477 | orchestrator | Tuesday 11 March 2025 01:24:13 +0000 (0:00:00.383) 0:00:47.885 *********
2025-03-11 01:28:22.372491 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:28:22.372564 | orchestrator |
2025-03-11 01:28:22.372579 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-03-11 01:28:22.372594 | orchestrator | Tuesday 11 March 2025 01:24:13 +0000 (0:00:00.646) 0:00:48.532 *********
2025-03-11 01:28:22.372606 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.372619 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.372632 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.372644 | orchestrator |
2025-03-11 01:28:22.372656 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-03-11 01:28:22.372669 | orchestrator | Tuesday 11 March 2025 01:24:15 +0000 (0:00:01.917) 0:00:50.449 *********
2025-03-11 01:28:22.372681 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.372693 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.372706 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.372718 | orchestrator |
2025-03-11 01:28:22.372730 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-03-11 01:28:22.372743 | orchestrator | Tuesday 11 March 2025 01:24:17 +0000 (0:00:01.437) 0:00:51.887 *********
2025-03-11 01:28:22.372755 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.372767 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.372780 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.372792 | orchestrator |
2025-03-11 01:28:22.372805 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-03-11 01:28:22.372817 | orchestrator | Tuesday 11 March 2025 01:24:18 +0000 (0:00:00.848) 0:00:52.735 *********
2025-03-11 01:28:22.372829 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.372842 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.372854 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.372866 | orchestrator |
2025-03-11 01:28:22.372879 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-03-11 01:28:22.372891 | orchestrator | Tuesday 11 March 2025 01:24:21 +0000 (0:00:03.019) 0:00:55.755 *********
2025-03-11 01:28:22.372904 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.372916 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.372928 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.372941 | orchestrator |
2025-03-11 01:28:22.372953 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-03-11 01:28:22.372972 | orchestrator | Tuesday 11 March 2025 01:24:21 +0000 (0:00:00.608) 0:00:56.364 *********
2025-03-11 01:28:22.372985 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.372997 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.373009 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.373022 | orchestrator |
2025-03-11 01:28:22.373034 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-03-11 01:28:22.373047 | orchestrator | Tuesday 11 March 2025 01:24:22 +0000 (0:00:00.518) 0:00:56.882 *********
2025-03-11 01:28:22.373059 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.373071 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:22.373084 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:22.373096 | orchestrator |
2025-03-11 01:28:22.373108 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-03-11 01:28:22.373121 | orchestrator | Tuesday 11 March 2025 01:24:23 +0000 (0:00:01.546) 0:00:58.429 *********
2025-03-11 01:28:22.373139 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-03-11 01:28:22.373153 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-03-11 01:28:22.373166 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-03-11 01:28:22.373178 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-03-11 01:28:22.373191 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-03-11 01:28:22.373203 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-03-11 01:28:22.373216 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-03-11 01:28:22.373228 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-03-11 01:28:22.373241 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-03-11 01:28:22.373253 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-03-11 01:28:22.373272 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-03-11 01:28:22.373284 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-03-11 01:28:22.373296 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-03-11 01:28:22.373309 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-03-11 01:28:22.373321 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-03-11 01:28:22.373333 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:22.373351 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:22.373363 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:22.373376 | orchestrator |
2025-03-11 01:28:22.373388 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-03-11 01:28:22.373401 | orchestrator | Tuesday 11 March 2025 01:25:20 +0000 (0:00:56.226) 0:01:54.656 *********
2025-03-11 01:28:22.373419 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:22.373432 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:22.373444 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:22.373456 | orchestrator |
2025-03-11 01:28:22.373468 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-03-11 01:28:22.373481 | orchestrator | Tuesday 11 March 2025 01:25:20 +0000 (0:00:00.496) 0:01:55.153 *********
2025-03-11 01:28:22.373493 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.373520 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:22.373533 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:22.373545 | orchestrator |
2025-03-11 01:28:22.373558 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-03-11 01:28:22.373570 | orchestrator | Tuesday 11 March 2025 01:25:21 +0000 (0:00:01.197) 0:01:56.350 *********
2025-03-11 01:28:22.373582 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:22.373595 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:22.373607 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:22.373619 | orchestrator |
2025-03-11 01:28:22.373631 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-03-11 01:28:22.373644 | orchestrator | Tuesday 11 March 2025 01:25:23 +0000 (0:00:01.460) 0:01:57.810 *********
2025-03-11 01:28:22.373656
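The "Verify that all nodes actually joined" task above retried for roughly a minute (retries 20 down to 16) before all three masters registered; the 0:00:56.226 figure on the following profile line is the time that check took. A hedged sketch of this retry-until pattern as it is commonly written in k3s-ansible-style roles (the exact command, delay, and group name are assumptions, not copied from this role):

```yaml
# Illustrative retry-until-joined check; command, delay and group name are
# assumptions, not the role's exact task.
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: >-
      k3s kubectl get nodes -l "node-role.kubernetes.io/master=true"
      -o=jsonpath="{.items[*].metadata.name}"
  register: nodes
  until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false
```

The `until` condition only passes once the number of master node names returned by the API equals the number of hosts in the master group, which is why the first few polls fail while etcd quorum is still forming.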
| orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.373668 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.373681 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.373693 | orchestrator | 2025-03-11 01:28:22.373705 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-03-11 01:28:22.373717 | orchestrator | Tuesday 11 March 2025 01:25:37 +0000 (0:00:14.453) 0:02:12.264 ********* 2025-03-11 01:28:22.373730 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.373742 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.373755 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.373767 | orchestrator | 2025-03-11 01:28:22.373779 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-03-11 01:28:22.373791 | orchestrator | Tuesday 11 March 2025 01:25:38 +0000 (0:00:01.253) 0:02:13.518 ********* 2025-03-11 01:28:22.373803 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.373816 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.373828 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.373840 | orchestrator | 2025-03-11 01:28:22.373853 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-03-11 01:28:22.373865 | orchestrator | Tuesday 11 March 2025 01:25:40 +0000 (0:00:01.296) 0:02:14.814 ********* 2025-03-11 01:28:22.373877 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.373890 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.373902 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.373914 | orchestrator | 2025-03-11 01:28:22.373936 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-03-11 01:28:22.373950 | orchestrator | Tuesday 11 March 2025 01:25:41 +0000 (0:00:01.184) 0:02:15.998 ********* 2025-03-11 01:28:22.373962 | orchestrator | ok: [testbed-node-1] 
2025-03-11 01:28:22.373975 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.373987 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.373999 | orchestrator | 2025-03-11 01:28:22.374012 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-03-11 01:28:22.374050 | orchestrator | Tuesday 11 March 2025 01:25:43 +0000 (0:00:01.761) 0:02:17.759 ********* 2025-03-11 01:28:22.374063 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.374075 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.374088 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.374100 | orchestrator | 2025-03-11 01:28:22.374113 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-03-11 01:28:22.374125 | orchestrator | Tuesday 11 March 2025 01:25:43 +0000 (0:00:00.567) 0:02:18.327 ********* 2025-03-11 01:28:22.374137 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.374156 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.374169 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.374181 | orchestrator | 2025-03-11 01:28:22.374193 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-03-11 01:28:22.374205 | orchestrator | Tuesday 11 March 2025 01:25:44 +0000 (0:00:00.828) 0:02:19.155 ********* 2025-03-11 01:28:22.374218 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.374230 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.374242 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.374254 | orchestrator | 2025-03-11 01:28:22.374267 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-03-11 01:28:22.374279 | orchestrator | Tuesday 11 March 2025 01:25:45 +0000 (0:00:00.878) 0:02:20.034 ********* 2025-03-11 01:28:22.374291 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.374303 | 
orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.374316 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.374328 | orchestrator | 2025-03-11 01:28:22.374340 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-03-11 01:28:22.374352 | orchestrator | Tuesday 11 March 2025 01:25:47 +0000 (0:00:01.549) 0:02:21.583 ********* 2025-03-11 01:28:22.374365 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:22.374377 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:22.374389 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:22.374402 | orchestrator | 2025-03-11 01:28:22.374414 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-03-11 01:28:22.374426 | orchestrator | Tuesday 11 March 2025 01:25:48 +0000 (0:00:01.090) 0:02:22.673 ********* 2025-03-11 01:28:22.374439 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.374451 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.374463 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.374475 | orchestrator | 2025-03-11 01:28:22.374488 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-03-11 01:28:22.374517 | orchestrator | Tuesday 11 March 2025 01:25:48 +0000 (0:00:00.333) 0:02:23.007 ********* 2025-03-11 01:28:22.374530 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.374542 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.374554 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.374567 | orchestrator | 2025-03-11 01:28:22.374579 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-03-11 01:28:22.374591 | orchestrator | Tuesday 11 March 2025 01:25:48 +0000 (0:00:00.333) 0:02:23.340 ********* 2025-03-11 01:28:22.374604 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.374616 | orchestrator | 
ok: [testbed-node-1] 2025-03-11 01:28:22.374628 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.374640 | orchestrator | 2025-03-11 01:28:22.374652 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-03-11 01:28:22.374665 | orchestrator | Tuesday 11 March 2025 01:25:49 +0000 (0:00:01.089) 0:02:24.429 ********* 2025-03-11 01:28:22.374677 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.374690 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.374710 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.374724 | orchestrator | 2025-03-11 01:28:22.374736 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-03-11 01:28:22.374749 | orchestrator | Tuesday 11 March 2025 01:25:50 +0000 (0:00:00.828) 0:02:25.257 ********* 2025-03-11 01:28:22.374761 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-11 01:28:22.374774 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-11 01:28:22.374787 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-11 01:28:22.374799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-03-11 01:28:22.374812 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-11 01:28:22.374830 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-03-11 01:28:22.374846 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-11 01:28:22.374860 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-11 01:28:22.374872 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-03-11 01:28:22.374884 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-03-11 01:28:22.374897 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-11 01:28:22.374909 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-03-11 01:28:22.374927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-11 01:28:22.374940 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-11 01:28:22.374953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-03-11 01:28:22.374965 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-11 01:28:22.374977 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-11 01:28:22.374990 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-11 01:28:22.375002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-03-11 01:28:22.375015 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-03-11 01:28:22.375027 | orchestrator | 2025-03-11 01:28:22.375040 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-03-11 01:28:22.375052 | orchestrator | 2025-03-11 01:28:22.375065 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-03-11 01:28:22.375077 | orchestrator | Tuesday 11 March 2025 01:25:54 +0000 (0:00:03.748) 
0:02:29.006 ********* 2025-03-11 01:28:22.375089 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:28:22.375102 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:28:22.375114 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:28:22.375126 | orchestrator | 2025-03-11 01:28:22.375139 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-03-11 01:28:22.375151 | orchestrator | Tuesday 11 March 2025 01:25:55 +0000 (0:00:00.628) 0:02:29.634 ********* 2025-03-11 01:28:22.375164 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:28:22.375176 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:28:22.375188 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:28:22.375200 | orchestrator | 2025-03-11 01:28:22.375212 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-03-11 01:28:22.375225 | orchestrator | Tuesday 11 March 2025 01:25:57 +0000 (0:00:01.964) 0:02:31.599 ********* 2025-03-11 01:28:22.375237 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:28:22.375249 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:28:22.375261 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:28:22.375273 | orchestrator | 2025-03-11 01:28:22.375290 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-03-11 01:28:22.375302 | orchestrator | Tuesday 11 March 2025 01:25:57 +0000 (0:00:00.457) 0:02:32.056 ********* 2025-03-11 01:28:22.375315 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-11 01:28:22.375327 | orchestrator | 2025-03-11 01:28:22.375339 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-03-11 01:28:22.375351 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:00.708) 0:02:32.765 ********* 2025-03-11 01:28:22.375373 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.375385 
| orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.375398 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.375410 | orchestrator | 2025-03-11 01:28:22.375422 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-03-11 01:28:22.375434 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:00.319) 0:02:33.085 ********* 2025-03-11 01:28:22.375446 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.375459 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.375471 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.375483 | orchestrator | 2025-03-11 01:28:22.375534 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-03-11 01:28:22.375549 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:00.334) 0:02:33.420 ********* 2025-03-11 01:28:22.375562 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:22.375574 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:22.375587 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:22.375598 | orchestrator | 2025-03-11 01:28:22.375608 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-03-11 01:28:22.375619 | orchestrator | Tuesday 11 March 2025 01:25:59 +0000 (0:00:00.471) 0:02:33.892 ********* 2025-03-11 01:28:22.375629 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:28:22.375639 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:28:22.375649 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:28:22.375659 | orchestrator | 2025-03-11 01:28:22.375669 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-03-11 01:28:22.375679 | orchestrator | Tuesday 11 March 2025 01:26:01 +0000 (0:00:02.289) 0:02:36.182 ********* 2025-03-11 01:28:22.375689 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:28:22.375699 | 
orchestrator | changed: [testbed-node-3] 2025-03-11 01:28:22.375709 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:28:22.375719 | orchestrator | 2025-03-11 01:28:22.375729 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-03-11 01:28:22.375739 | orchestrator | 2025-03-11 01:28:22.375749 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-03-11 01:28:22.375759 | orchestrator | Tuesday 11 March 2025 01:26:12 +0000 (0:00:10.825) 0:02:47.008 ********* 2025-03-11 01:28:22.375769 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.375779 | orchestrator | 2025-03-11 01:28:22.375790 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-03-11 01:28:22.375800 | orchestrator | Tuesday 11 March 2025 01:26:12 +0000 (0:00:00.532) 0:02:47.540 ********* 2025-03-11 01:28:22.375809 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.375819 | orchestrator | 2025-03-11 01:28:22.375829 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-03-11 01:28:22.375839 | orchestrator | Tuesday 11 March 2025 01:26:13 +0000 (0:00:00.555) 0:02:48.096 ********* 2025-03-11 01:28:22.375849 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-11 01:28:22.375859 | orchestrator | 2025-03-11 01:28:22.375875 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-03-11 01:28:22.375885 | orchestrator | Tuesday 11 March 2025 01:26:14 +0000 (0:00:00.628) 0:02:48.724 ********* 2025-03-11 01:28:22.375895 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.375905 | orchestrator | 2025-03-11 01:28:22.375915 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-03-11 01:28:22.375925 | orchestrator | Tuesday 11 March 2025 01:26:15 +0000 (0:00:01.268) 
0:02:49.993 ********* 2025-03-11 01:28:22.375935 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.375946 | orchestrator | 2025-03-11 01:28:22.375956 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-03-11 01:28:22.375966 | orchestrator | Tuesday 11 March 2025 01:26:16 +0000 (0:00:00.846) 0:02:50.840 ********* 2025-03-11 01:28:22.375976 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:28:22.375992 | orchestrator | 2025-03-11 01:28:22.376002 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-03-11 01:28:22.376012 | orchestrator | Tuesday 11 March 2025 01:26:17 +0000 (0:00:01.356) 0:02:52.196 ********* 2025-03-11 01:28:22.376022 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:28:22.376032 | orchestrator | 2025-03-11 01:28:22.376042 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-03-11 01:28:22.376052 | orchestrator | Tuesday 11 March 2025 01:26:18 +0000 (0:00:00.685) 0:02:52.882 ********* 2025-03-11 01:28:22.376062 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.376072 | orchestrator | 2025-03-11 01:28:22.376082 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-03-11 01:28:22.376092 | orchestrator | Tuesday 11 March 2025 01:26:18 +0000 (0:00:00.601) 0:02:53.483 ********* 2025-03-11 01:28:22.376102 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.376112 | orchestrator | 2025-03-11 01:28:22.376122 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-03-11 01:28:22.376132 | orchestrator | 2025-03-11 01:28:22.376147 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-03-11 01:28:22.376157 | orchestrator | Tuesday 11 March 2025 01:26:19 +0000 (0:00:00.624) 0:02:54.107 
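The two "Change server address in the kubeconfig" tasks above rewrite the API endpoint in the kubeconfig fetched from testbed-node-0, so clients on the manager reach the cluster through the VIP endpoint (https://192.168.16.8:6443, per the earlier "Configure kubectl cluster" task). A hedged sketch of such a rewrite; the file path and the original address are assumptions, not values read from this job:

```yaml
# Illustrative rewrite of the server line in a fetched kubeconfig.
# 127.0.0.1 is the address k3s writes into k3s.yaml by default (an assumption
# here); the VIP address appears earlier in this run. The path is hypothetical.
- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ operator_home | default('/home/operator') }}/.kube/config"
    regexp: 'https://127\.0\.0\.1:6443'
    replace: 'https://192.168.16.8:6443'
```

Using `ansible.builtin.replace` keeps the task idempotent: once the server line already points at the VIP, the regexp no longer matches and the task reports ok instead of changed.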
********* 2025-03-11 01:28:22.376167 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.376177 | orchestrator | 2025-03-11 01:28:22.376187 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-03-11 01:28:22.376197 | orchestrator | Tuesday 11 March 2025 01:26:19 +0000 (0:00:00.196) 0:02:54.304 ********* 2025-03-11 01:28:22.376208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-03-11 01:28:22.376219 | orchestrator | 2025-03-11 01:28:22.376229 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-03-11 01:28:22.376239 | orchestrator | Tuesday 11 March 2025 01:26:19 +0000 (0:00:00.247) 0:02:54.552 ********* 2025-03-11 01:28:22.376249 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.376259 | orchestrator | 2025-03-11 01:28:22.376269 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-03-11 01:28:22.376279 | orchestrator | Tuesday 11 March 2025 01:26:21 +0000 (0:00:01.937) 0:02:56.490 ********* 2025-03-11 01:28:22.376289 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.376299 | orchestrator | 2025-03-11 01:28:22.376309 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-03-11 01:28:22.376319 | orchestrator | Tuesday 11 March 2025 01:26:23 +0000 (0:00:01.663) 0:02:58.153 ********* 2025-03-11 01:28:22.376329 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.376339 | orchestrator | 2025-03-11 01:28:22.376349 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-03-11 01:28:22.376359 | orchestrator | Tuesday 11 March 2025 01:26:24 +0000 (0:00:00.739) 0:02:58.893 ********* 2025-03-11 01:28:22.376369 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.376379 | 
orchestrator | 2025-03-11 01:28:22.376389 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-03-11 01:28:22.376399 | orchestrator | Tuesday 11 March 2025 01:26:24 +0000 (0:00:00.571) 0:02:59.464 ********* 2025-03-11 01:28:22.376409 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.376419 | orchestrator | 2025-03-11 01:28:22.376429 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-03-11 01:28:22.376439 | orchestrator | Tuesday 11 March 2025 01:26:33 +0000 (0:00:08.149) 0:03:07.613 ********* 2025-03-11 01:28:22.376449 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.376459 | orchestrator | 2025-03-11 01:28:22.376469 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-03-11 01:28:22.376479 | orchestrator | Tuesday 11 March 2025 01:26:50 +0000 (0:00:17.327) 0:03:24.941 ********* 2025-03-11 01:28:22.376489 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.376513 | orchestrator | 2025-03-11 01:28:22.376523 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-03-11 01:28:22.376538 | orchestrator | 2025-03-11 01:28:22.376549 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-03-11 01:28:22.376559 | orchestrator | Tuesday 11 March 2025 01:26:50 +0000 (0:00:00.610) 0:03:25.551 ********* 2025-03-11 01:28:22.376569 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.376579 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.376589 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.376599 | orchestrator | 2025-03-11 01:28:22.376609 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-03-11 01:28:22.376619 | orchestrator | Tuesday 11 March 2025 01:26:51 +0000 (0:00:00.719) 0:03:26.271 ********* 
2025-03-11 01:28:22.376629 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.376639 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.376649 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.376659 | orchestrator | 2025-03-11 01:28:22.376670 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-03-11 01:28:22.376680 | orchestrator | Tuesday 11 March 2025 01:26:52 +0000 (0:00:00.417) 0:03:26.688 ********* 2025-03-11 01:28:22.376690 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:28:22.376700 | orchestrator | 2025-03-11 01:28:22.376714 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-03-11 01:28:22.376725 | orchestrator | Tuesday 11 March 2025 01:26:52 +0000 (0:00:00.682) 0:03:27.370 ********* 2025-03-11 01:28:22.376735 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.376745 | orchestrator | 2025-03-11 01:28:22.376755 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-03-11 01:28:22.376765 | orchestrator | Tuesday 11 March 2025 01:26:53 +0000 (0:00:00.936) 0:03:28.306 ********* 2025-03-11 01:28:22.376775 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.376785 | orchestrator | 2025-03-11 01:28:22.376795 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-03-11 01:28:22.376806 | orchestrator | Tuesday 11 March 2025 01:26:54 +0000 (0:00:00.965) 0:03:29.272 ********* 2025-03-11 01:28:22.376816 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.376826 | orchestrator | 2025-03-11 01:28:22.376836 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-03-11 01:28:22.376849 | orchestrator | Tuesday 11 March 2025 01:26:55 +0000 
(0:00:00.946) 0:03:30.219 ********* 2025-03-11 01:28:22.376860 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.376870 | orchestrator | 2025-03-11 01:28:22.376880 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-03-11 01:28:22.376890 | orchestrator | Tuesday 11 March 2025 01:26:57 +0000 (0:00:01.495) 0:03:31.715 ********* 2025-03-11 01:28:22.376900 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.376910 | orchestrator | 2025-03-11 01:28:22.376921 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-03-11 01:28:22.376931 | orchestrator | Tuesday 11 March 2025 01:26:57 +0000 (0:00:00.379) 0:03:32.095 ********* 2025-03-11 01:28:22.376941 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.376955 | orchestrator | 2025-03-11 01:28:22.376965 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-03-11 01:28:22.376975 | orchestrator | Tuesday 11 March 2025 01:26:57 +0000 (0:00:00.375) 0:03:32.471 ********* 2025-03-11 01:28:22.376985 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.376995 | orchestrator | 2025-03-11 01:28:22.377005 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-03-11 01:28:22.377015 | orchestrator | Tuesday 11 March 2025 01:26:58 +0000 (0:00:00.318) 0:03:32.789 ********* 2025-03-11 01:28:22.377025 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.377035 | orchestrator | 2025-03-11 01:28:22.377045 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-03-11 01:28:22.377056 | orchestrator | Tuesday 11 March 2025 01:26:58 +0000 (0:00:00.290) 0:03:33.079 ********* 2025-03-11 01:28:22.377069 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.377079 | orchestrator | 2025-03-11 01:28:22.377089 | orchestrator | 
TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-03-11 01:28:22.377099 | orchestrator | Tuesday 11 March 2025 01:27:09 +0000 (0:00:11.247) 0:03:44.326 ********* 2025-03-11 01:28:22.377109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-03-11 01:28:22.377119 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-03-11 01:28:22.377129 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-03-11 01:28:22.377139 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-03-11 01:28:22.377149 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-03-11 01:28:22.377159 | orchestrator | 2025-03-11 01:28:22.377169 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-03-11 01:28:22.377179 | orchestrator | Tuesday 11 March 2025 01:27:50 +0000 (0:00:41.209) 0:04:25.536 ********* 2025-03-11 01:28:22.377189 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.377199 | orchestrator | 2025-03-11 01:28:22.377209 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-03-11 01:28:22.377219 | orchestrator | Tuesday 11 March 2025 01:27:52 +0000 (0:00:01.574) 0:04:27.111 ********* 2025-03-11 01:28:22.377229 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.377239 | orchestrator | 2025-03-11 01:28:22.377249 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-03-11 01:28:22.377259 | orchestrator | Tuesday 11 March 2025 01:27:53 +0000 (0:00:00.988) 0:04:28.099 ********* 2025-03-11 01:28:22.377269 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-03-11 01:28:22.377279 | orchestrator | 2025-03-11 01:28:22.377289 | orchestrator | TASK [k3s_server_post : Print 
error message if BGP manifests application fails] *** 2025-03-11 01:28:22.377299 | orchestrator | Tuesday 11 March 2025 01:27:54 +0000 (0:00:00.951) 0:04:29.051 ********* 2025-03-11 01:28:22.377309 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.377319 | orchestrator | 2025-03-11 01:28:22.377329 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-03-11 01:28:22.377339 | orchestrator | Tuesday 11 March 2025 01:27:54 +0000 (0:00:00.316) 0:04:29.367 ********* 2025-03-11 01:28:22.377349 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-03-11 01:28:22.377359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-03-11 01:28:22.377369 | orchestrator | 2025-03-11 01:28:22.377379 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-03-11 01:28:22.377389 | orchestrator | Tuesday 11 March 2025 01:27:57 +0000 (0:00:03.134) 0:04:32.501 ********* 2025-03-11 01:28:22.377399 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:22.377409 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:22.377419 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:22.377429 | orchestrator | 2025-03-11 01:28:22.377439 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-03-11 01:28:22.377449 | orchestrator | Tuesday 11 March 2025 01:27:58 +0000 (0:00:00.498) 0:04:33.000 ********* 2025-03-11 01:28:22.377478 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.377490 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.377519 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.377530 | orchestrator | 2025-03-11 01:28:22.377540 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-03-11 01:28:22.377550 | orchestrator | 2025-03-11 
01:28:22.377561 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] ********** 2025-03-11 01:28:22.377571 | orchestrator | Tuesday 11 March 2025 01:27:59 +0000 (0:00:01.197) 0:04:34.197 ********* 2025-03-11 01:28:22.377581 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:22.377596 | orchestrator | 2025-03-11 01:28:22.377607 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] ********* 2025-03-11 01:28:22.377621 | orchestrator | Tuesday 11 March 2025 01:27:59 +0000 (0:00:00.153) 0:04:34.350 ********* 2025-03-11 01:28:22.377631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-03-11 01:28:22.377641 | orchestrator | 2025-03-11 01:28:22.377652 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ******************************** 2025-03-11 01:28:22.377662 | orchestrator | Tuesday 11 March 2025 01:28:00 +0000 (0:00:00.527) 0:04:34.878 ********* 2025-03-11 01:28:22.377672 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:22.377682 | orchestrator | 2025-03-11 01:28:22.377692 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-03-11 01:28:22.377703 | orchestrator | 2025-03-11 01:28:22.377713 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-03-11 01:28:22.377723 | orchestrator | Tuesday 11 March 2025 01:28:07 +0000 (0:00:06.873) 0:04:41.752 ********* 2025-03-11 01:28:22.377733 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:28:22.377743 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:28:22.377754 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:28:22.377764 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:22.377774 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:22.377784 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:22.377794 | 
orchestrator | 2025-03-11 01:28:22.377804 | orchestrator | TASK [Manage labels] *********************************************************** 2025-03-11 01:28:22.377815 | orchestrator | Tuesday 11 March 2025 01:28:08 +0000 (0:00:00.998) 0:04:42.750 ********* 2025-03-11 01:28:22.377825 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-03-11 01:28:22.377835 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-03-11 01:28:22.377845 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-03-11 01:28:22.377855 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-03-11 01:28:22.377866 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-03-11 01:28:22.377876 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-03-11 01:28:22.377886 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-03-11 01:28:22.377896 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-03-11 01:28:22.377906 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-03-11 01:28:22.377916 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-03-11 01:28:22.377926 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-03-11 01:28:22.377936 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-03-11 01:28:22.377947 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-03-11 01:28:22.377957 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=openstack-control-plane=enabled) 2025-03-11 01:28:22.377967 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-03-11 01:28:22.377977 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-03-11 01:28:22.377987 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-03-11 01:28:22.377997 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-03-11 01:28:22.378007 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-03-11 01:28:22.378041 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-03-11 01:28:22.378054 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-03-11 01:28:22.378064 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-03-11 01:28:22.378074 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-03-11 01:28:22.378084 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-03-11 01:28:22.378098 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-03-11 01:28:22.378108 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-03-11 01:28:22.378118 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-03-11 01:28:22.378128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-03-11 01:28:22.378138 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-03-11 01:28:22.378153 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-rgw=true) 2025-03-11 01:28:25.415038 | orchestrator | 2025-03-11 01:28:25.415147 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-03-11 01:28:25.415168 | orchestrator | Tuesday 11 March 2025 01:28:19 +0000 (0:00:11.164) 0:04:53.915 ********* 2025-03-11 01:28:25.415183 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:25.415200 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:25.415215 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:25.415229 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:25.415244 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:25.415259 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:25.415274 | orchestrator | 2025-03-11 01:28:25.415290 | orchestrator | TASK [Manage taints] *********************************************************** 2025-03-11 01:28:25.415305 | orchestrator | Tuesday 11 March 2025 01:28:20 +0000 (0:00:00.728) 0:04:54.643 ********* 2025-03-11 01:28:25.415319 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:28:25.415333 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:28:25.415348 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:28:25.415362 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:25.415376 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:25.415391 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:25.415405 | orchestrator | 2025-03-11 01:28:25.415420 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:28:25.415435 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:28:25.415452 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-03-11 01:28:25.415467 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 
failed=0 skipped=24  rescued=0 ignored=0 2025-03-11 01:28:25.415481 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-03-11 01:28:25.415529 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-11 01:28:25.415545 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-11 01:28:25.415559 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-03-11 01:28:25.415573 | orchestrator | 2025-03-11 01:28:25.415615 | orchestrator | Tuesday 11 March 2025 01:28:20 +0000 (0:00:00.792) 0:04:55.435 ********* 2025-03-11 01:28:25.415631 | orchestrator | =============================================================================== 2025-03-11 01:28:25.415647 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.23s 2025-03-11 01:28:25.415664 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.21s 2025-03-11 01:28:25.415680 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 17.33s 2025-03-11 01:28:25.415695 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.45s 2025-03-11 01:28:25.415710 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 11.24s 2025-03-11 01:28:25.415725 | orchestrator | Manage labels ---------------------------------------------------------- 11.16s 2025-03-11 01:28:25.415740 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.83s 2025-03-11 01:28:25.415756 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 9.59s 2025-03-11 01:28:25.415772 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 8.15s 
2025-03-11 01:28:25.415787 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 6.87s 2025-03-11 01:28:25.415803 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.42s 2025-03-11 01:28:25.415819 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.75s 2025-03-11 01:28:25.415834 | orchestrator | k3s_prereq : Load br_netfilter ------------------------------------------ 3.32s 2025-03-11 01:28:25.415850 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.24s 2025-03-11 01:28:25.415866 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.13s 2025-03-11 01:28:25.415881 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 3.02s 2025-03-11 01:28:25.415896 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.94s 2025-03-11 01:28:25.415911 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 2.56s 2025-03-11 01:28:25.415927 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.46s 2025-03-11 01:28:25.415941 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 2.29s 2025-03-11 01:28:25.415955 | orchestrator | 2025-03-11 01:28:22 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:25.415985 | orchestrator | 2025-03-11 01:28:25 | INFO  | Task fc3bcad6-b18b-46a4-9fb7-4833e732dfab is in state STARTED 2025-03-11 01:28:25.416418 | orchestrator | 2025-03-11 01:28:25 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:25.427834 | orchestrator | 2025-03-11 01:28:25 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:25.429217 | orchestrator | 2025-03-11 
01:28:25 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:25.432810 | orchestrator | 2025-03-11 01:28:25 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:25.439986 | orchestrator | 2025-03-11 01:28:25 | INFO  | Task 17cdaff7-2d7e-44cb-97c4-fe33330a67bf is in state STARTED 2025-03-11 01:28:28.507092 | orchestrator | 2025-03-11 01:28:25 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:28.507216 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task fc3bcad6-b18b-46a4-9fb7-4833e732dfab is in state STARTED 2025-03-11 01:28:28.508317 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:28.508906 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:28.508937 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:28.511026 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:28.511711 | orchestrator | 2025-03-11 01:28:28 | INFO  | Task 17cdaff7-2d7e-44cb-97c4-fe33330a67bf is in state STARTED 2025-03-11 01:28:31.607202 | orchestrator | 2025-03-11 01:28:28 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:31.744921 | orchestrator | 2025-03-11 01:28:31 | INFO  | Task fc3bcad6-b18b-46a4-9fb7-4833e732dfab is in state STARTED 2025-03-11 01:28:34.705076 | orchestrator | 2025-03-11 01:28:31 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:34.705181 | orchestrator | 2025-03-11 01:28:31 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:34.705219 | orchestrator | 2025-03-11 01:28:31 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:34.705235 | orchestrator | 2025-03-11 
01:28:31 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:34.705250 | orchestrator | 2025-03-11 01:28:31 | INFO  | Task 17cdaff7-2d7e-44cb-97c4-fe33330a67bf is in state STARTED 2025-03-11 01:28:34.705264 | orchestrator | 2025-03-11 01:28:31 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:34.705295 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task fc3bcad6-b18b-46a4-9fb7-4833e732dfab is in state STARTED 2025-03-11 01:28:34.710955 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:34.710988 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:34.711010 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:34.712687 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state STARTED 2025-03-11 01:28:34.716642 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:34.718688 | orchestrator | 2025-03-11 01:28:34 | INFO  | Task 17cdaff7-2d7e-44cb-97c4-fe33330a67bf is in state SUCCESS 2025-03-11 01:28:34.718834 | orchestrator | 2025-03-11 01:28:34 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:37.771238 | orchestrator | 2025-03-11 01:28:37 | INFO  | Task fc3bcad6-b18b-46a4-9fb7-4833e732dfab is in state SUCCESS 2025-03-11 01:28:37.771743 | orchestrator | 2025-03-11 01:28:37 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:37.772477 | orchestrator | 2025-03-11 01:28:37 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:37.773485 | orchestrator | 2025-03-11 01:28:37 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:37.773892 | orchestrator | 2025-03-11 
01:28:37 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state STARTED 2025-03-11 01:28:37.774697 | orchestrator | 2025-03-11 01:28:37 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:40.816911 | orchestrator | 2025-03-11 01:28:37 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:40.817023 | orchestrator | 2025-03-11 01:28:40 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:40.818743 | orchestrator | 2025-03-11 01:28:40 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:40.818782 | orchestrator | 2025-03-11 01:28:40 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:40.821341 | orchestrator | 2025-03-11 01:28:40 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state STARTED 2025-03-11 01:28:43.862795 | orchestrator | 2025-03-11 01:28:40 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:43.862915 | orchestrator | 2025-03-11 01:28:40 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:43.862950 | orchestrator | 2025-03-11 01:28:43 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:43.863183 | orchestrator | 2025-03-11 01:28:43 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:43.864131 | orchestrator | 2025-03-11 01:28:43 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:43.868342 | orchestrator | 2025-03-11 01:28:43 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state STARTED 2025-03-11 01:28:43.869342 | orchestrator | 2025-03-11 01:28:43 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:46.914686 | orchestrator | 2025-03-11 01:28:43 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:46.914815 | orchestrator | 2025-03-11 01:28:46 | INFO  | Task 
f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:46.915306 | orchestrator | 2025-03-11 01:28:46 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:46.915337 | orchestrator | 2025-03-11 01:28:46 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:46.915360 | orchestrator | 2025-03-11 01:28:46 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state STARTED 2025-03-11 01:28:46.915881 | orchestrator | 2025-03-11 01:28:46 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:46.915987 | orchestrator | 2025-03-11 01:28:46 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:49.955847 | orchestrator | 2025-03-11 01:28:49 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state STARTED 2025-03-11 01:28:49.956058 | orchestrator | 2025-03-11 01:28:49 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:28:49.956602 | orchestrator | 2025-03-11 01:28:49 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:28:49.957099 | orchestrator | 2025-03-11 01:28:49 | INFO  | Task b3e1df3b-8df4-44df-b234-ee8e24059644 is in state SUCCESS 2025-03-11 01:28:49.959797 | orchestrator | 2025-03-11 01:28:49 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED 2025-03-11 01:28:53.008734 | orchestrator | 2025-03-11 01:28:49 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:28:53.008861 | orchestrator | 2025-03-11 01:28:53 | INFO  | Task f439f903-ee64-4b65-8d05-396dd3a85a08 is in state SUCCESS 2025-03-11 01:28:53.011052 | orchestrator | 2025-03-11 01:28:53.011091 | orchestrator | 2025-03-11 01:28:53.011106 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-03-11 01:28:53.011138 | orchestrator | 2025-03-11 01:28:53.011158 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2025-03-11 01:28:53.011173 | orchestrator | Tuesday 11 March 2025 01:28:28 +0000 (0:00:00.232) 0:00:00.233 ********* 2025-03-11 01:28:53.011188 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-11 01:28:53.011202 | orchestrator | 2025-03-11 01:28:53.011216 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-03-11 01:28:53.011231 | orchestrator | Tuesday 11 March 2025 01:28:29 +0000 (0:00:00.912) 0:00:01.145 ********* 2025-03-11 01:28:53.011245 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:53.011284 | orchestrator | 2025-03-11 01:28:53.011299 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-03-11 01:28:53.011314 | orchestrator | Tuesday 11 March 2025 01:28:31 +0000 (0:00:02.065) 0:00:03.211 ********* 2025-03-11 01:28:53.011329 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:53.011344 | orchestrator | 2025-03-11 01:28:53.011358 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:28:53.011374 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:28:53.011390 | orchestrator | 2025-03-11 01:28:53.011405 | orchestrator | Tuesday 11 March 2025 01:28:32 +0000 (0:00:00.952) 0:00:04.163 ********* 2025-03-11 01:28:53.011420 | orchestrator | =============================================================================== 2025-03-11 01:28:53.011435 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.07s 2025-03-11 01:28:53.011450 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.95s 2025-03-11 01:28:53.011465 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.91s 2025-03-11 01:28:53.011480 | orchestrator | 2025-03-11 
01:28:53.011495 | orchestrator | 2025-03-11 01:28:53.011548 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-03-11 01:28:53.011563 | orchestrator | 2025-03-11 01:28:53.011577 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-03-11 01:28:53.011591 | orchestrator | Tuesday 11 March 2025 01:28:27 +0000 (0:00:00.197) 0:00:00.197 ********* 2025-03-11 01:28:53.011605 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:53.011620 | orchestrator | 2025-03-11 01:28:53.011634 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-03-11 01:28:53.011650 | orchestrator | Tuesday 11 March 2025 01:28:28 +0000 (0:00:00.891) 0:00:01.088 ********* 2025-03-11 01:28:53.011666 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:53.011682 | orchestrator | 2025-03-11 01:28:53.011699 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-03-11 01:28:53.011715 | orchestrator | Tuesday 11 March 2025 01:28:29 +0000 (0:00:00.715) 0:00:01.804 ********* 2025-03-11 01:28:53.011730 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-11 01:28:53.011746 | orchestrator | 2025-03-11 01:28:53.011762 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-03-11 01:28:53.011778 | orchestrator | Tuesday 11 March 2025 01:28:29 +0000 (0:00:00.737) 0:00:02.541 ********* 2025-03-11 01:28:53.011794 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:53.011810 | orchestrator | 2025-03-11 01:28:53.011825 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-03-11 01:28:53.011842 | orchestrator | Tuesday 11 March 2025 01:28:32 +0000 (0:00:02.228) 0:00:04.769 ********* 2025-03-11 01:28:53.011857 | orchestrator | changed: [testbed-manager] 2025-03-11 01:28:53.011873 | 
orchestrator | 2025-03-11 01:28:53.011888 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-03-11 01:28:53.011905 | orchestrator | Tuesday 11 March 2025 01:28:33 +0000 (0:00:00.949) 0:00:05.719 ********* 2025-03-11 01:28:53.011921 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:28:53.011937 | orchestrator | 2025-03-11 01:28:53.011953 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-03-11 01:28:53.011968 | orchestrator | Tuesday 11 March 2025 01:28:34 +0000 (0:00:01.313) 0:00:07.033 ********* 2025-03-11 01:28:53.011985 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:28:53.012000 | orchestrator | 2025-03-11 01:28:53.012014 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-03-11 01:28:53.012028 | orchestrator | Tuesday 11 March 2025 01:28:35 +0000 (0:00:00.685) 0:00:07.718 ********* 2025-03-11 01:28:53.012042 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:53.012056 | orchestrator | 2025-03-11 01:28:53.012071 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-03-11 01:28:53.012093 | orchestrator | Tuesday 11 March 2025 01:28:35 +0000 (0:00:00.696) 0:00:08.415 ********* 2025-03-11 01:28:53.012107 | orchestrator | ok: [testbed-manager] 2025-03-11 01:28:53.012121 | orchestrator | 2025-03-11 01:28:53.012141 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:28:53.012155 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:28:53.012170 | orchestrator | 2025-03-11 01:28:53.012184 | orchestrator | Tuesday 11 March 2025 01:28:36 +0000 (0:00:00.421) 0:00:08.836 ********* 2025-03-11 01:28:53.012198 | orchestrator | 
=============================================================================== 2025-03-11 01:28:53.012212 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.23s 2025-03-11 01:28:53.012226 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.31s 2025-03-11 01:28:53.012240 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.95s 2025-03-11 01:28:53.012254 | orchestrator | Get home directory of operator user ------------------------------------- 0.89s 2025-03-11 01:28:53.012268 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2025-03-11 01:28:53.012293 | orchestrator | Create .kube directory -------------------------------------------------- 0.72s 2025-03-11 01:28:53.012307 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.70s 2025-03-11 01:28:53.012321 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.69s 2025-03-11 01:28:53.012335 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.42s 2025-03-11 01:28:53.012349 | orchestrator | 2025-03-11 01:28:53.012363 | orchestrator | None 2025-03-11 01:28:53.012377 | orchestrator | 2025-03-11 01:28:53.012391 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-03-11 01:28:53.012405 | orchestrator | 2025-03-11 01:28:53.012419 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-11 01:28:53.012433 | orchestrator | Tuesday 11 March 2025 01:25:55 +0000 (0:00:00.592) 0:00:00.592 ********* 2025-03-11 01:28:53.012447 | orchestrator | ok: [localhost] => { 2025-03-11 01:28:53.012462 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-03-11 01:28:53.012476 | orchestrator | } 2025-03-11 01:28:53.012491 | orchestrator | 2025-03-11 01:28:53.012524 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-03-11 01:28:53.012539 | orchestrator | Tuesday 11 March 2025 01:25:55 +0000 (0:00:00.155) 0:00:00.747 ********* 2025-03-11 01:28:53.012554 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-03-11 01:28:53.012569 | orchestrator | ...ignoring 2025-03-11 01:28:53.012583 | orchestrator | 2025-03-11 01:28:53.012597 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-03-11 01:28:53.012611 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:03.475) 0:00:04.223 ********* 2025-03-11 01:28:53.012625 | orchestrator | skipping: [localhost] 2025-03-11 01:28:53.012639 | orchestrator | 2025-03-11 01:28:53.012653 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-03-11 01:28:53.012667 | orchestrator | Tuesday 11 March 2025 01:25:59 +0000 (0:00:00.230) 0:00:04.453 ********* 2025-03-11 01:28:53.012681 | orchestrator | ok: [localhost] 2025-03-11 01:28:53.012695 | orchestrator | 2025-03-11 01:28:53.012709 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:28:53.012723 | orchestrator | 2025-03-11 01:28:53.012737 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:28:53.012751 | orchestrator | Tuesday 11 March 2025 01:25:59 +0000 (0:00:00.411) 0:00:04.865 ********* 2025-03-11 01:28:53.012765 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:53.012779 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:28:53.012800 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:28:53.012814 | orchestrator | 2025-03-11 
01:28:53.012828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:28:53.012842 | orchestrator | Tuesday 11 March 2025 01:26:00 +0000 (0:00:01.436) 0:00:06.302 ********* 2025-03-11 01:28:53.012857 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-03-11 01:28:53.012871 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-03-11 01:28:53.012885 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-03-11 01:28:53.012899 | orchestrator | 2025-03-11 01:28:53.012913 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-03-11 01:28:53.012927 | orchestrator | 2025-03-11 01:28:53.012941 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-11 01:28:53.012955 | orchestrator | Tuesday 11 March 2025 01:26:01 +0000 (0:00:01.114) 0:00:07.416 ********* 2025-03-11 01:28:53.012970 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:28:53.012984 | orchestrator | 2025-03-11 01:28:53.012998 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-11 01:28:53.013012 | orchestrator | Tuesday 11 March 2025 01:26:03 +0000 (0:00:01.112) 0:00:08.529 ********* 2025-03-11 01:28:53.013026 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:53.013040 | orchestrator | 2025-03-11 01:28:53.013054 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-03-11 01:28:53.013073 | orchestrator | Tuesday 11 March 2025 01:26:04 +0000 (0:00:01.284) 0:00:09.813 ********* 2025-03-11 01:28:53.013087 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013101 | orchestrator | 2025-03-11 01:28:53.013115 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-03-11 01:28:53.013129 | orchestrator | Tuesday 11 March 2025 01:26:04 +0000 (0:00:00.439) 0:00:10.252 ********* 2025-03-11 01:28:53.013143 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013157 | orchestrator | 2025-03-11 01:28:53.013286 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-03-11 01:28:53.013308 | orchestrator | Tuesday 11 March 2025 01:26:06 +0000 (0:00:01.349) 0:00:11.602 ********* 2025-03-11 01:28:53.013323 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013337 | orchestrator | 2025-03-11 01:28:53.013351 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-03-11 01:28:53.013365 | orchestrator | Tuesday 11 March 2025 01:26:06 +0000 (0:00:00.472) 0:00:12.075 ********* 2025-03-11 01:28:53.013379 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013393 | orchestrator | 2025-03-11 01:28:53.013407 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-11 01:28:53.013421 | orchestrator | Tuesday 11 March 2025 01:26:07 +0000 (0:00:00.593) 0:00:12.668 ********* 2025-03-11 01:28:53.013435 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:28:53.013449 | orchestrator | 2025-03-11 01:28:53.013463 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-11 01:28:53.013478 | orchestrator | Tuesday 11 March 2025 01:26:09 +0000 (0:00:02.186) 0:00:14.855 ********* 2025-03-11 01:28:53.013492 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:53.013526 | orchestrator | 2025-03-11 01:28:53.013540 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-03-11 01:28:53.013563 | orchestrator | Tuesday 11 March 2025 01:26:10 +0000 (0:00:01.175) 0:00:16.031 ********* 2025-03-11 
01:28:53.013578 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013592 | orchestrator | 2025-03-11 01:28:53.013606 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-03-11 01:28:53.013620 | orchestrator | Tuesday 11 March 2025 01:26:10 +0000 (0:00:00.412) 0:00:16.443 ********* 2025-03-11 01:28:53.013634 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.013655 | orchestrator | 2025-03-11 01:28:53.013669 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-03-11 01:28:53.013691 | orchestrator | Tuesday 11 March 2025 01:26:11 +0000 (0:00:00.526) 0:00:16.969 ********* 2025-03-11 01:28:53.013710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013762 | orchestrator | 2025-03-11 01:28:53.013776 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-03-11 01:28:53.013790 | orchestrator | Tuesday 11 March 2025 01:26:12 +0000 (0:00:01.222) 0:00:18.191 ********* 2025-03-11 01:28:53.013814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.013868 | orchestrator | 2025-03-11 01:28:53.013883 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-03-11 01:28:53.013897 | orchestrator | Tuesday 11 March 2025 01:26:15 +0000 (0:00:02.960) 0:00:21.152 ********* 2025-03-11 01:28:53.013912 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:28:53.013927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:28:53.013943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:28:53.013959 | orchestrator | 2025-03-11 01:28:53.013975 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-03-11 01:28:53.013991 | orchestrator | Tuesday 11 March 2025 01:26:19 +0000 (0:00:03.791) 0:00:24.943 ********* 2025-03-11 01:28:53.014007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-11 01:28:53.014073 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-11 01:28:53.014097 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-11 01:28:53.014113 | orchestrator | 2025-03-11 01:28:53.014129 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-03-11 01:28:53.014145 | orchestrator | Tuesday 11 March 2025 01:26:25 +0000 (0:00:06.200) 0:00:31.144 ********* 2025-03-11 01:28:53.014168 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-11 01:28:53.014185 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-11 01:28:53.014201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-11 01:28:53.014216 | orchestrator | 2025-03-11 01:28:53.014232 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-03-11 01:28:53.014248 | orchestrator | Tuesday 11 March 2025 01:26:30 +0000 (0:00:04.545) 0:00:35.690 ********* 
2025-03-11 01:28:53.014264 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-11 01:28:53.014278 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-11 01:28:53.014292 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-11 01:28:53.014306 | orchestrator | 2025-03-11 01:28:53.014320 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-03-11 01:28:53.014334 | orchestrator | Tuesday 11 March 2025 01:26:35 +0000 (0:00:04.816) 0:00:40.507 ********* 2025-03-11 01:28:53.014348 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-11 01:28:53.014362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-11 01:28:53.014376 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-11 01:28:53.014390 | orchestrator | 2025-03-11 01:28:53.014409 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-03-11 01:28:53.014423 | orchestrator | Tuesday 11 March 2025 01:26:38 +0000 (0:00:03.686) 0:00:44.194 ********* 2025-03-11 01:28:53.014437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-11 01:28:53.014451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-11 01:28:53.014465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-11 01:28:53.014479 | orchestrator | 2025-03-11 01:28:53.014493 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-11 01:28:53.014553 | orchestrator | Tuesday 11 
March 2025 01:26:42 +0000 (0:00:03.561) 0:00:47.755 ********* 2025-03-11 01:28:53.014568 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:28:53.014582 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:28:53.014596 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:28:53.014611 | orchestrator | 2025-03-11 01:28:53.014625 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-03-11 01:28:53.014639 | orchestrator | Tuesday 11 March 2025 01:26:43 +0000 (0:00:00.717) 0:00:48.473 ********* 2025-03-11 01:28:53.014654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.014687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.014703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:28:53.014718 | orchestrator | 2025-03-11 01:28:53.014732 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2025-03-11 01:28:53.014746 | orchestrator | Tuesday 11 March 2025 01:26:45 +0000 (0:00:02.095) 0:00:50.568 ********* 2025-03-11 01:28:53.014760 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:53.014774 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:53.014788 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:53.014802 | orchestrator | 2025-03-11 01:28:53.014816 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-03-11 01:28:53.014831 | orchestrator | Tuesday 11 March 2025 01:26:46 +0000 (0:00:01.113) 0:00:51.682 ********* 2025-03-11 01:28:53.014845 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:53.014859 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:53.014873 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:53.014887 | orchestrator | 2025-03-11 01:28:53.015002 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-03-11 01:28:53.015019 | orchestrator | Tuesday 11 March 2025 01:26:54 +0000 (0:00:08.272) 0:00:59.954 ********* 2025-03-11 01:28:53.015033 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:28:53.015047 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:28:53.015061 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:28:53.015075 | orchestrator | 2025-03-11 01:28:53.015089 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-11 01:28:53.015103 | orchestrator | 2025-03-11 01:28:53.015126 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-11 01:28:53.015140 | orchestrator | Tuesday 11 March 2025 01:26:56 +0000 (0:00:01.623) 0:01:01.578 ********* 2025-03-11 01:28:53.015154 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:28:53.015168 | orchestrator | 2025-03-11 01:28:53.015182 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:28:53.015196 | orchestrator | Tuesday 11 March 2025 01:26:58 +0000 (0:00:02.754) 0:01:04.332 *********
2025-03-11 01:28:53.015210 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:28:53.015224 | orchestrator |
2025-03-11 01:28:53.015237 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:28:53.015251 | orchestrator | Tuesday 11 March 2025 01:26:59 +0000 (0:00:00.810) 0:01:05.143 *********
2025-03-11 01:28:53.015265 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:53.015279 | orchestrator |
2025-03-11 01:28:53.015293 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:28:53.015306 | orchestrator | Tuesday 11 March 2025 01:27:07 +0000 (0:00:07.668) 0:01:12.811 *********
2025-03-11 01:28:53.015320 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:28:53.015334 | orchestrator |
2025-03-11 01:28:53.015348 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-03-11 01:28:53.015362 | orchestrator |
2025-03-11 01:28:53.015376 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-03-11 01:28:53.015390 | orchestrator | Tuesday 11 March 2025 01:27:59 +0000 (0:00:52.446) 0:02:05.257 *********
2025-03-11 01:28:53.015404 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:53.015418 | orchestrator |
2025-03-11 01:28:53.015431 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:28:53.015445 | orchestrator | Tuesday 11 March 2025 01:28:00 +0000 (0:00:01.038) 0:02:06.296 *********
2025-03-11 01:28:53.015459 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:28:53.015473 | orchestrator |
2025-03-11 01:28:53.015486 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:28:53.015517 | orchestrator | Tuesday 11 March 2025 01:28:01 +0000 (0:00:00.561) 0:02:06.858 *********
2025-03-11 01:28:53.015532 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:53.015547 | orchestrator |
2025-03-11 01:28:53.015561 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:28:53.015574 | orchestrator | Tuesday 11 March 2025 01:28:03 +0000 (0:00:02.527) 0:02:09.385 *********
2025-03-11 01:28:53.015588 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:28:53.015602 | orchestrator |
2025-03-11 01:28:53.015616 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-03-11 01:28:53.015632 | orchestrator |
2025-03-11 01:28:53.015648 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-03-11 01:28:53.015676 | orchestrator | Tuesday 11 March 2025 01:28:21 +0000 (0:00:17.520) 0:02:26.905 *********
2025-03-11 01:28:53.015692 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:53.015708 | orchestrator |
2025-03-11 01:28:53.015731 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:28:53.015747 | orchestrator | Tuesday 11 March 2025 01:28:22 +0000 (0:00:00.886) 0:02:27.792 *********
2025-03-11 01:28:53.015763 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:28:53.015779 | orchestrator |
2025-03-11 01:28:53.015794 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:28:53.015810 | orchestrator | Tuesday 11 March 2025 01:28:23 +0000 (0:00:00.934) 0:02:28.726 *********
2025-03-11 01:28:53.015826 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:53.015842 | orchestrator |
2025-03-11 01:28:53.015856 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:28:53.015870 | orchestrator | Tuesday 11 March 2025 01:28:26 +0000 (0:00:03.579) 0:02:32.306 *********
2025-03-11 01:28:53.015884 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:28:53.015898 | orchestrator |
2025-03-11 01:28:53.015912 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-03-11 01:28:53.015933 | orchestrator |
2025-03-11 01:28:53.015948 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-03-11 01:28:53.015962 | orchestrator | Tuesday 11 March 2025 01:28:43 +0000 (0:00:17.044) 0:02:49.350 *********
2025-03-11 01:28:53.015976 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:28:53.015990 | orchestrator |
2025-03-11 01:28:53.016004 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-03-11 01:28:53.016017 | orchestrator | Tuesday 11 March 2025 01:28:46 +0000 (0:00:02.881) 0:02:52.232 *********
2025-03-11 01:28:53.016031 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-11 01:28:53.016045 | orchestrator | enable_outward_rabbitmq_True
2025-03-11 01:28:53.016060 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-11 01:28:53.016074 | orchestrator | outward_rabbitmq_restart
2025-03-11 01:28:53.016088 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:28:53.016102 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:28:53.016116 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:28:53.016130 | orchestrator |
2025-03-11 01:28:53.016144 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-03-11 01:28:53.016158 | orchestrator | skipping: no hosts matched
2025-03-11 01:28:53.016172 | orchestrator |
2025-03-11 01:28:53.016186 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-03-11 01:28:53.016200 | orchestrator | skipping: no hosts matched
2025-03-11 01:28:53.016214 | orchestrator |
2025-03-11 01:28:53.016228 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-03-11 01:28:53.016242 | orchestrator | skipping: no hosts matched
2025-03-11 01:28:53.016256 | orchestrator |
2025-03-11 01:28:53.016270 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:28:53.016285 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-03-11 01:28:53.016299 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-03-11 01:28:53.016313 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:28:53.016327 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:28:53.016341 | orchestrator |
2025-03-11 01:28:53.016356 | orchestrator |
2025-03-11 01:28:53.016369 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:28:53.016383 | orchestrator | Tuesday 11 March 2025 01:28:50 +0000 (0:00:03.263) 0:02:55.495 *********
2025-03-11 01:28:53.016397 | orchestrator | ===============================================================================
2025-03-11 01:28:53.016411 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.01s
2025-03-11 01:28:53.016425 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 13.78s
2025-03-11 01:28:53.016439 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.27s
2025-03-11 01:28:53.016453 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 6.20s
2025-03-11 01:28:53.016466 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.82s
2025-03-11 01:28:53.016480 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 4.68s
2025-03-11 01:28:53.016494 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 4.54s
2025-03-11 01:28:53.016562 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.79s
2025-03-11 01:28:53.016577 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 3.69s
2025-03-11 01:28:53.016599 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 3.56s
2025-03-11 01:28:53.016613 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.48s
2025-03-11 01:28:53.016632 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.26s
2025-03-11 01:28:53.016646 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.96s
2025-03-11 01:28:53.016660 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 2.88s
2025-03-11 01:28:53.016674 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 2.31s
2025-03-11 01:28:53.016687 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.19s
2025-03-11 01:28:53.016701 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.10s
2025-03-11 01:28:53.016721 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.62s
2025-03-11 01:28:53.017013 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.44s
2025-03-11 01:28:53.017036 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.35s
2025-03-11 01:28:53.017049 | orchestrator | 2025-03-11
01:28:53 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED
2025-03-11 01:28:53.017062 | orchestrator | 2025-03-11 01:28:53 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:28:53.017080 | orchestrator | 2025-03-11 01:28:53 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state STARTED
2025-03-11 01:28:56.087153 | orchestrator | 2025-03-11 01:28:53 | INFO  | Wait 1 second(s) until the next check
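The repeated INFO lines here are a plain state-polling loop: the job queries the state of each background task ID and sleeps between rounds while any task is still STARTED, stopping once every task reports a terminal state such as SUCCESS. A minimal sketch of that pattern (the `get_state` callback and `wait_for_tasks` helper are hypothetical illustrations, not the actual osism client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll every task until none is still STARTED, mirroring the
    'Task <id> is in state ...' / 'Wait 1 second(s) ...' log pattern."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        # Query and report the state of every task that is not yet done.
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Only tasks still in STARTED need another round.
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Under this sketch the loop's runtime is dominated by the slowest task, which matches the log: all three tasks poll as STARTED for roughly 80 seconds before the first one flips to SUCCESS.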
2025-03-11 01:30:15.534913 | orchestrator | 2025-03-11 01:30:15 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED
2025-03-11 01:30:15.537560 | orchestrator | 2025-03-11 01:30:15 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:30:15.540048 | orchestrator | 2025-03-11 01:30:15 | INFO  | Task 348b0b65-518e-4268-926b-9c107657852a is in state SUCCESS
2025-03-11 01:30:15.540675 | orchestrator |
2025-03-11 01:30:15.540710 | orchestrator | 2025-03-11 01:30:15 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:30:15.542612 | orchestrator |
2025-03-11 01:30:15.542654 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:30:15.542671 | orchestrator |
2025-03-11 01:30:15.542686 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-11 01:30:15.542701 | orchestrator | Tuesday 11 March 2025 01:27:12 +0000 (0:00:00.257) 0:00:00.257 *********
2025-03-11 01:30:15.542716 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:30:15.542731 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:30:15.542745 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:30:15.542760 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.542774 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.542788 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.542802 | orchestrator |
2025-03-11 01:30:15.542817 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:30:15.542831 | orchestrator | Tuesday 11 March 2025 01:27:13 +0000 (0:00:00.744) 0:00:01.001 *********
2025-03-11 01:30:15.542845 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-03-11 01:30:15.542873 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-03-11 01:30:15.542888 | orchestrator | ok: [testbed-node-5] =>
(item=enable_ovn_True)
2025-03-11 01:30:15.542903 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-03-11 01:30:15.542917 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-03-11 01:30:15.542931 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-03-11 01:30:15.542945 | orchestrator |
2025-03-11 01:30:15.542959 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-03-11 01:30:15.542974 | orchestrator |
2025-03-11 01:30:15.542988 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-03-11 01:30:15.543002 | orchestrator | Tuesday 11 March 2025 01:27:15 +0000 (0:00:01.953) 0:00:02.954 *********
2025-03-11 01:30:15.543017 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:30:15.543032 | orchestrator |
2025-03-11 01:30:15.543047 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-03-11 01:30:15.543061 | orchestrator | Tuesday 11 March 2025 01:27:18 +0000 (0:00:02.974) 0:00:05.929 *********
2025-03-11 01:30:15.543085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543218 | orchestrator |
2025-03-11 01:30:15.543234 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-03-11 01:30:15.543250 | orchestrator | Tuesday 11 March 2025 01:27:19 +0000 (0:00:01.770) 0:00:07.699 *********
2025-03-11 01:30:15.543265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543373 | orchestrator |
2025-03-11 01:30:15.543389 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-03-11 01:30:15.543405 | orchestrator | Tuesday 11 March 2025 01:27:23 +0000 (0:00:03.467) 0:00:11.166 *********
2025-03-11 01:30:15.543421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543555 | orchestrator |
2025-03-11 01:30:15.543571 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-03-11 01:30:15.543593 | orchestrator | Tuesday 11 March 2025 01:27:24 +0000 (0:00:01.328) 0:00:12.495 *********
2025-03-11 01:30:15.543607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543705 | orchestrator |
2025-03-11 01:30:15.543719 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-03-11 01:30:15.543734 | orchestrator | Tuesday 11 March 2025 01:27:27 +0000 (0:00:03.011) 0:00:15.506 *********
2025-03-11 01:30:15.543753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.543819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:30:15.543834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:30:15.543848 | orchestrator | 2025-03-11 01:30:15.543863 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-03-11 01:30:15.543877 | orchestrator | Tuesday 11 March 2025 01:27:29 +0000 (0:00:02.004) 0:00:17.511 ********* 2025-03-11 01:30:15.543892 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:30:15.543906 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:30:15.543921 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:30:15.543935 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:30:15.543949 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:30:15.543963 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:30:15.543978 | orchestrator | 2025-03-11 01:30:15.543992 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-03-11 01:30:15.544007 | orchestrator | Tuesday 11 March 2025 01:27:33 +0000 (0:00:03.848) 0:00:21.360 ********* 2025-03-11 01:30:15.544022 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-03-11 01:30:15.544036 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-03-11 01:30:15.544050 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-03-11 01:30:15.544070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-03-11 01:30:15.544085 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-03-11 01:30:15.544114 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-03-11 01:30:15.544132 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544147 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544161 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544184 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544213 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-11 01:30:15.544227 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544271 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544286 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544301 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-11 01:30:15.544329 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544343 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544358 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544372 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-11 01:30:15.544420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544463 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544478 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544493 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-11 01:30:15.544521 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544564 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-11 01:30:15.544580 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-11 01:30:15.544594 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544609 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-11 01:30:15.544623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-11 01:30:15.544645 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-11 01:30:15.544676 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-03-11 01:30:15.544694 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-03-11 01:30:15.544708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-11 01:30:15.544723 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-03-11 01:30:15.544737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-11 01:30:15.544752 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-03-11 01:30:15.544766 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-11 01:30:15.544780 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-03-11 01:30:15.544795 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-11 01:30:15.544809 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-11 01:30:15.544823 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-03-11 01:30:15.544837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-11 01:30:15.544852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-11 01:30:15.544866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-11 01:30:15.544881 | orchestrator | 2025-03-11 01:30:15.544895 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.544910 | orchestrator | Tuesday 11 March 2025 01:27:57 
+0000 (0:00:23.631) 0:00:44.992 ********* 2025-03-11 01:30:15.544924 | orchestrator | 2025-03-11 01:30:15.544939 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.544953 | orchestrator | Tuesday 11 March 2025 01:27:57 +0000 (0:00:00.144) 0:00:45.137 ********* 2025-03-11 01:30:15.544967 | orchestrator | 2025-03-11 01:30:15.544981 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.544995 | orchestrator | Tuesday 11 March 2025 01:27:57 +0000 (0:00:00.361) 0:00:45.499 ********* 2025-03-11 01:30:15.545009 | orchestrator | 2025-03-11 01:30:15.545023 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.545038 | orchestrator | Tuesday 11 March 2025 01:27:57 +0000 (0:00:00.151) 0:00:45.651 ********* 2025-03-11 01:30:15.545052 | orchestrator | 2025-03-11 01:30:15.545066 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.545080 | orchestrator | Tuesday 11 March 2025 01:27:57 +0000 (0:00:00.177) 0:00:45.829 ********* 2025-03-11 01:30:15.545094 | orchestrator | 2025-03-11 01:30:15.545108 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-11 01:30:15.545122 | orchestrator | Tuesday 11 March 2025 01:27:58 +0000 (0:00:00.190) 0:00:46.019 ********* 2025-03-11 01:30:15.545136 | orchestrator | 2025-03-11 01:30:15.545150 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-03-11 01:30:15.545172 | orchestrator | Tuesday 11 March 2025 01:27:58 +0000 (0:00:00.379) 0:00:46.399 ********* 2025-03-11 01:30:15.545186 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:30:15.545200 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545215 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:30:15.545229 | orchestrator | 
ok: [testbed-node-5] 2025-03-11 01:30:15.545243 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545257 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545271 | orchestrator | 2025-03-11 01:30:15.545286 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-03-11 01:30:15.545300 | orchestrator | Tuesday 11 March 2025 01:28:02 +0000 (0:00:03.970) 0:00:50.370 ********* 2025-03-11 01:30:15.545315 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:30:15.545334 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:30:15.545349 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:30:15.545363 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:30:15.545377 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:30:15.545391 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:30:15.545405 | orchestrator | 2025-03-11 01:30:15.545420 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-03-11 01:30:15.545434 | orchestrator | 2025-03-11 01:30:15.545449 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-11 01:30:15.545463 | orchestrator | Tuesday 11 March 2025 01:28:22 +0000 (0:00:20.034) 0:01:10.405 ********* 2025-03-11 01:30:15.545477 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:30:15.545491 | orchestrator | 2025-03-11 01:30:15.545505 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-11 01:30:15.545520 | orchestrator | Tuesday 11 March 2025 01:28:24 +0000 (0:00:01.878) 0:01:12.284 ********* 2025-03-11 01:30:15.545588 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:30:15.545603 | orchestrator | 2025-03-11 01:30:15.545627 | orchestrator | TASK [ovn-db : 
Checking for any existing OVN DB container volumes] ************* 2025-03-11 01:30:15.545641 | orchestrator | Tuesday 11 March 2025 01:28:27 +0000 (0:00:02.889) 0:01:15.174 ********* 2025-03-11 01:30:15.545654 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545666 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545679 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545691 | orchestrator | 2025-03-11 01:30:15.545703 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-03-11 01:30:15.545716 | orchestrator | Tuesday 11 March 2025 01:28:29 +0000 (0:00:01.997) 0:01:17.171 ********* 2025-03-11 01:30:15.545728 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545740 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545752 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545765 | orchestrator | 2025-03-11 01:30:15.545778 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-03-11 01:30:15.545790 | orchestrator | Tuesday 11 March 2025 01:28:30 +0000 (0:00:00.982) 0:01:18.154 ********* 2025-03-11 01:30:15.545803 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545815 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545828 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545840 | orchestrator | 2025-03-11 01:30:15.545852 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-03-11 01:30:15.545865 | orchestrator | Tuesday 11 March 2025 01:28:32 +0000 (0:00:01.965) 0:01:20.120 ********* 2025-03-11 01:30:15.545877 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545889 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545901 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545914 | orchestrator | 2025-03-11 01:30:15.545926 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 
2025-03-11 01:30:15.545938 | orchestrator | Tuesday 11 March 2025 01:28:33 +0000 (0:00:00.913) 0:01:21.033 ********* 2025-03-11 01:30:15.545951 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.545970 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.545982 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.545995 | orchestrator | 2025-03-11 01:30:15.546007 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-03-11 01:30:15.546055 | orchestrator | Tuesday 11 March 2025 01:28:33 +0000 (0:00:00.672) 0:01:21.705 ********* 2025-03-11 01:30:15.546070 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546083 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546095 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546108 | orchestrator | 2025-03-11 01:30:15.546120 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-03-11 01:30:15.546133 | orchestrator | Tuesday 11 March 2025 01:28:34 +0000 (0:00:00.947) 0:01:22.653 ********* 2025-03-11 01:30:15.546145 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546157 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546170 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546182 | orchestrator | 2025-03-11 01:30:15.546195 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-03-11 01:30:15.546207 | orchestrator | Tuesday 11 March 2025 01:28:35 +0000 (0:00:00.867) 0:01:23.521 ********* 2025-03-11 01:30:15.546219 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546232 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546244 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546256 | orchestrator | 2025-03-11 01:30:15.546269 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-03-11 
01:30:15.546281 | orchestrator | Tuesday 11 March 2025 01:28:36 +0000 (0:00:00.658) 0:01:24.179 ********* 2025-03-11 01:30:15.546293 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546305 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546318 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546330 | orchestrator | 2025-03-11 01:30:15.546343 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-03-11 01:30:15.546355 | orchestrator | Tuesday 11 March 2025 01:28:36 +0000 (0:00:00.479) 0:01:24.659 ********* 2025-03-11 01:30:15.546367 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546380 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546392 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546404 | orchestrator | 2025-03-11 01:30:15.546416 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-03-11 01:30:15.546429 | orchestrator | Tuesday 11 March 2025 01:28:37 +0000 (0:00:00.663) 0:01:25.322 ********* 2025-03-11 01:30:15.546441 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546454 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546466 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546478 | orchestrator | 2025-03-11 01:30:15.546491 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-03-11 01:30:15.546503 | orchestrator | Tuesday 11 March 2025 01:28:38 +0000 (0:00:00.605) 0:01:25.928 ********* 2025-03-11 01:30:15.546516 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546545 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546558 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546570 | orchestrator | 2025-03-11 01:30:15.546583 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-03-11 
01:30:15.546595 | orchestrator | Tuesday 11 March 2025 01:28:38 +0000 (0:00:00.889) 0:01:26.818 ********* 2025-03-11 01:30:15.546608 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546620 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546632 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546645 | orchestrator | 2025-03-11 01:30:15.546657 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-03-11 01:30:15.546670 | orchestrator | Tuesday 11 March 2025 01:28:39 +0000 (0:00:00.837) 0:01:27.656 ********* 2025-03-11 01:30:15.546682 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546694 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546713 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546725 | orchestrator | 2025-03-11 01:30:15.546738 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-03-11 01:30:15.546751 | orchestrator | Tuesday 11 March 2025 01:28:40 +0000 (0:00:01.051) 0:01:28.707 ********* 2025-03-11 01:30:15.546763 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546776 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546788 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546800 | orchestrator | 2025-03-11 01:30:15.546818 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-03-11 01:30:15.546831 | orchestrator | Tuesday 11 March 2025 01:28:42 +0000 (0:00:01.262) 0:01:29.969 ********* 2025-03-11 01:30:15.546844 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546858 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546877 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546891 | orchestrator | 2025-03-11 01:30:15.546909 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-03-11 
01:30:15.546922 | orchestrator | Tuesday 11 March 2025 01:28:43 +0000 (0:00:01.695) 0:01:31.664 ********* 2025-03-11 01:30:15.546935 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.546948 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.546961 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.546973 | orchestrator | 2025-03-11 01:30:15.546986 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-11 01:30:15.546998 | orchestrator | Tuesday 11 March 2025 01:28:45 +0000 (0:00:01.507) 0:01:33.172 ********* 2025-03-11 01:30:15.547011 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:30:15.547023 | orchestrator | 2025-03-11 01:30:15.547036 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-03-11 01:30:15.547048 | orchestrator | Tuesday 11 March 2025 01:28:47 +0000 (0:00:02.488) 0:01:35.660 ********* 2025-03-11 01:30:15.547061 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.547074 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.547086 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.547098 | orchestrator | 2025-03-11 01:30:15.547111 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-03-11 01:30:15.547123 | orchestrator | Tuesday 11 March 2025 01:28:48 +0000 (0:00:00.681) 0:01:36.342 ********* 2025-03-11 01:30:15.547136 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:30:15.547148 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:30:15.547160 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:30:15.547173 | orchestrator | 2025-03-11 01:30:15.547186 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-03-11 01:30:15.547198 | orchestrator | Tuesday 11 March 2025 01:28:49 +0000 (0:00:00.607) 0:01:36.949 
********* 2025-03-11 01:30:15.547210 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.547223 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.547235 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.547248 | orchestrator | 2025-03-11 01:30:15.547260 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-03-11 01:30:15.547273 | orchestrator | Tuesday 11 March 2025 01:28:49 +0000 (0:00:00.613) 0:01:37.562 ********* 2025-03-11 01:30:15.547285 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.547298 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.547310 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.547322 | orchestrator | 2025-03-11 01:30:15.547335 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-03-11 01:30:15.547347 | orchestrator | Tuesday 11 March 2025 01:28:50 +0000 (0:00:00.840) 0:01:38.402 ********* 2025-03-11 01:30:15.547360 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.547372 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.547385 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.547397 | orchestrator | 2025-03-11 01:30:15.547415 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-03-11 01:30:15.547428 | orchestrator | Tuesday 11 March 2025 01:28:51 +0000 (0:00:00.453) 0:01:38.856 ********* 2025-03-11 01:30:15.547440 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:30:15.547453 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:30:15.547465 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:30:15.547478 | orchestrator | 2025-03-11 01:30:15.547490 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-03-11 01:30:15.547502 | orchestrator | Tuesday 11 March 2025 01:28:52 +0000 (0:00:01.182) 
0:01:40.038 *********
2025-03-11 01:30:15.547515 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:30:15.547542 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.547555 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.547567 | orchestrator |
2025-03-11 01:30:15.547580 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-03-11 01:30:15.547592 | orchestrator | Tuesday 11 March 2025 01:28:53 +0000 (0:00:01.158) 0:01:41.197 *********
2025-03-11 01:30:15.547605 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:30:15.547617 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.547630 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.547642 | orchestrator |
2025-03-11 01:30:15.547655 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-03-11 01:30:15.547667 | orchestrator | Tuesday 11 March 2025 01:28:54 +0000 (0:00:01.141) 0:01:42.338 *********
2025-03-11 01:30:15.547680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547815 | orchestrator |
2025-03-11 01:30:15.547828 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-03-11 01:30:15.547841 | orchestrator | Tuesday 11 March 2025 01:28:56 +0000 (0:00:02.053) 0:01:44.391 *********
2025-03-11 01:30:15.547853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.547986 | orchestrator |
2025-03-11 01:30:15.547999 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-03-11 01:30:15.548011 | orchestrator | Tuesday 11 March 2025 01:29:04 +0000 (0:00:07.749) 0:01:52.141 *********
2025-03-11 01:30:15.548024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.548160 | orchestrator |
2025-03-11 01:30:15.548173 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.548185 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:02.834) 0:01:54.976 *********
2025-03-11 01:30:15.548198 | orchestrator |
2025-03-11 01:30:15.548211 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.548223 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:00.076) 0:01:55.052 *********
2025-03-11 01:30:15.548236 | orchestrator |
2025-03-11 01:30:15.548249 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.548265 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:00.059) 0:01:55.112 *********
2025-03-11 01:30:15.548278 | orchestrator |
2025-03-11 01:30:15.548291 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-03-11 01:30:15.548303 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:00.296) 0:01:55.408 *********
2025-03-11 01:30:15.548316 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.548328 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.548341 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.548354 | orchestrator |
2025-03-11 01:30:15.548367 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-03-11 01:30:15.548379 | orchestrator | Tuesday 11 March 2025 01:29:16 +0000 (0:00:08.451) 0:02:03.859 *********
2025-03-11 01:30:15.548392 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.548404 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.548417 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.548429 | orchestrator |
2025-03-11 01:30:15.548442 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-03-11 01:30:15.548455 | orchestrator | Tuesday 11 March 2025 01:29:19 +0000 (0:00:03.153) 0:02:07.013 *********
2025-03-11 01:30:15.548467 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.548480 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.548492 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.548504 | orchestrator |
2025-03-11 01:30:15.548517 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-03-11 01:30:15.548571 | orchestrator | Tuesday 11 March 2025 01:29:27 +0000 (0:00:08.159) 0:02:15.173 *********
2025-03-11 01:30:15.548585 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:30:15.548597 | orchestrator |
2025-03-11 01:30:15.548610 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-03-11 01:30:15.548632 | orchestrator | Tuesday 11 March 2025 01:29:27 +0000 (0:00:00.145) 0:02:15.318 *********
2025-03-11 01:30:15.548645 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.548657 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.548670 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.548682 | orchestrator |
2025-03-11 01:30:15.548700 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-03-11 01:30:15.548713 | orchestrator | Tuesday 11 March 2025 01:29:28 +0000 (0:00:01.257) 0:02:16.576 *********
2025-03-11 01:30:15.548726 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.548738 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.548751 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.548763 | orchestrator |
2025-03-11 01:30:15.548775 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-03-11 01:30:15.548788 | orchestrator | Tuesday 11 March 2025 01:29:29 +0000 (0:00:00.660) 0:02:17.236 *********
2025-03-11 01:30:15.548800 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.548813 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.548825 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.548838 | orchestrator |
2025-03-11 01:30:15.548850 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-03-11 01:30:15.548862 | orchestrator | Tuesday 11 March 2025 01:29:30 +0000 (0:00:01.071) 0:02:18.308 *********
2025-03-11 01:30:15.548875 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.548887 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.548899 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.548917 | orchestrator |
2025-03-11 01:30:15.548929 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-03-11 01:30:15.548942 | orchestrator | Tuesday 11 March 2025 01:29:31 +0000 (0:00:00.642) 0:02:18.950 *********
2025-03-11 01:30:15.548954 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.548964 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.548975 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.548985 | orchestrator |
2025-03-11 01:30:15.548995 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-03-11 01:30:15.549005 | orchestrator | Tuesday 11 March 2025 01:29:32 +0000 (0:00:01.543) 0:02:20.494 *********
2025-03-11 01:30:15.549015 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.549025 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.549035 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.549045 | orchestrator |
2025-03-11 01:30:15.549055 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-03-11 01:30:15.549065 | orchestrator | Tuesday 11 March 2025 01:29:33 +0000 (0:00:00.546) 0:02:21.281 *********
2025-03-11 01:30:15.549075 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.549085 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.549095 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.549105 | orchestrator |
2025-03-11 01:30:15.549115 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-03-11 01:30:15.549126 | orchestrator | Tuesday 11 March 2025 01:29:33 +0000 (0:00:00.546) 0:02:21.828 *********
2025-03-11 01:30:15.549136 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549147 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549157 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549173 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549184 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549194 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549209 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549220 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549240 | orchestrator |
2025-03-11 01:30:15.549251 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-03-11 01:30:15.549261 | orchestrator | Tuesday 11 March 2025 01:29:35 +0000 (0:00:01.817) 0:02:23.646 *********
2025-03-11 01:30:15.549271 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549297 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549347 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549379 | orchestrator |
2025-03-11 01:30:15.549389 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-03-11 01:30:15.549400 | orchestrator | Tuesday 11 March 2025 01:29:41 +0000 (0:00:05.221) 0:02:28.868 *********
2025-03-11 01:30:15.549410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549436 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549447 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549461 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549471 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549511 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:30:15.549521 | orchestrator |
2025-03-11 01:30:15.549545 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.549556 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:03.282) 0:02:32.151 *********
2025-03-11 01:30:15.549566 | orchestrator |
2025-03-11 01:30:15.549576 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.549590 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:00.229) 0:02:32.380 *********
2025-03-11 01:30:15.549600 | orchestrator |
2025-03-11 01:30:15.549610 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-03-11 01:30:15.549620 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:00.064) 0:02:32.444 *********
2025-03-11 01:30:15.549630 | orchestrator |
2025-03-11 01:30:15.549640 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-03-11 01:30:15.549656 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:00.071) 0:02:32.516 *********
2025-03-11 01:30:15.549666 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.549676 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.549686 | orchestrator |
2025-03-11 01:30:15.549696 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-03-11 01:30:15.549706 | orchestrator | Tuesday 11 March 2025 01:29:51 +0000 (0:00:07.240) 0:02:39.757 *********
2025-03-11 01:30:15.549716 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.549726 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.549736 | orchestrator |
2025-03-11 01:30:15.549746 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-03-11 01:30:15.549756 | orchestrator | Tuesday 11 March 2025 01:29:58 +0000 (0:00:06.512) 0:02:46.269 *********
2025-03-11 01:30:15.549766 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:30:15.549776 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:30:15.549786 | orchestrator |
2025-03-11 01:30:15.549796 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-03-11 01:30:15.549806 | orchestrator | Tuesday 11 March 2025 01:30:05 +0000 (0:00:06.847) 0:02:53.116 *********
2025-03-11 01:30:15.549816 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:30:15.549826 | orchestrator |
2025-03-11 01:30:15.549836 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-03-11 01:30:15.549846 | orchestrator | Tuesday 11 March 2025 01:30:05 +0000 (0:00:00.470) 0:02:53.587 *********
2025-03-11 01:30:15.549856 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.549866 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.549876 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.549886 | orchestrator |
2025-03-11 01:30:15.549896 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-03-11 01:30:15.549906 | orchestrator | Tuesday 11 March 2025 01:30:06 +0000 (0:00:01.012) 0:02:54.600 *********
2025-03-11 01:30:15.549916 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.549926 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.549937 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.549946 | orchestrator |
2025-03-11 01:30:15.549957 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-03-11 01:30:15.549967 | orchestrator | Tuesday 11 March 2025 01:30:07 +0000 (0:00:00.733) 0:02:55.333 *********
2025-03-11 01:30:15.549977 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.549987 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.549997 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.550007 | orchestrator |
2025-03-11 01:30:15.550037 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-03-11 01:30:15.550049 | orchestrator | Tuesday 11 March 2025 01:30:08 +0000 (0:00:01.064) 0:02:56.397 *********
2025-03-11 01:30:15.550060 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:30:15.550070 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:30:15.550080 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:30:15.550090 | orchestrator |
2025-03-11 01:30:15.550100 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-03-11 01:30:15.550110 | orchestrator | Tuesday 11 March 2025 01:30:09 +0000 (0:00:00.976) 0:02:57.374 *********
2025-03-11 01:30:15.550120 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.550131 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.550141 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.550151 | orchestrator |
2025-03-11 01:30:15.550161 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-03-11 01:30:15.550171 | orchestrator | Tuesday 11 March 2025 01:30:10 +0000 (0:00:00.997) 0:02:58.372 *********
2025-03-11 01:30:15.550181 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:30:15.550191 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:30:15.550201 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:30:15.550211 | orchestrator |
2025-03-11 01:30:15.550222 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:30:15.550232 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-03-11 01:30:15.550246 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-03-11 01:30:15.550261 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-03-11 01:30:18.589113 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:30:18.589226 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:30:18.589243 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:30:18.589260 | orchestrator |
2025-03-11 01:30:18.589275 | orchestrator |
2025-03-11 01:30:18.589290 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:30:18.589305 | orchestrator | Tuesday 11 March 2025 01:30:12 +0000 (0:00:01.791) 0:03:00.164 *********
2025-03-11 01:30:18.589319 | orchestrator | ===============================================================================
2025-03-11 01:30:18.589333 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.63s
2025-03-11 01:30:18.589347 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 20.04s
2025-03-11 01:30:18.589361 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.69s
2025-03-11 01:30:18.589375 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.01s
2025-03-11 01:30:18.589389 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.67s
2025-03-11 01:30:18.589422 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 7.75s
2025-03-11 01:30:18.589437 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.22s
2025-03-11 01:30:18.589451 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 3.97s
2025-03-11 01:30:18.589465 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.85s
2025-03-11 01:30:18.589479 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.47s
2025-03-11 01:30:18.589493 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.28s
2025-03-11 01:30:18.589507 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.01s
2025-03-11 01:30:18.589521 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.97s
2025-03-11 01:30:18.589569 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.89s
2025-03-11 01:30:18.589584 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.83s
2025-03-11 01:30:18.589598 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.49s
2025-03-11 01:30:18.589611 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.05s
2025-03-11 01:30:18.589625 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.00s
2025-03-11 01:30:18.589639 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 2.00s
2025-03-11 01:30:18.589655 | orchestrator | ovn-db : Divide hosts by their OVN SB volume availability --------------- 1.97s
2025-03-11 01:30:18.589688 | orchestrator | 2025-03-11 01:30:18 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED
2025-03-11 01:30:21.643082 | orchestrator | 2025-03-11
01:30:18 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:30:21.643228 | orchestrator | 2025-03-11 01:30:18 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:33:46.300879 | orchestrator | 2025-03-11 01:33:43 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:46.301001 | orchestrator | 2025-03-11 01:33:43 | INFO  | Wait 1 second(s)
until the next check 2025-03-11 01:33:46.301038 | orchestrator | 2025-03-11 01:33:46 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:33:46.304532 | orchestrator | 2025-03-11 01:33:46 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:46.306171 | orchestrator | 2025-03-11 01:33:46 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:33:49.359831 | orchestrator | 2025-03-11 01:33:49 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state STARTED 2025-03-11 01:33:52.411805 | orchestrator | 2025-03-11 01:33:49 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:52.411889 | orchestrator | 2025-03-11 01:33:49 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:33:52.411908 | orchestrator | 2025-03-11 01:33:52 | INFO  | Task e42e2560-b007-4b0f-99b7-ecb5a9761dd6 is in state SUCCESS 2025-03-11 01:33:52.415721 | orchestrator | 2025-03-11 01:33:52.415887 | orchestrator | 2025-03-11 01:33:52.415907 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:33:52.415921 | orchestrator | 2025-03-11 01:33:52.415934 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:33:52.415947 | orchestrator | Tuesday 11 March 2025 01:25:19 +0000 (0:00:00.419) 0:00:00.419 ********* 2025-03-11 01:33:52.415959 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.418335 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.418403 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.418418 | orchestrator | 2025-03-11 01:33:52.418434 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:33:52.418448 | orchestrator | Tuesday 11 March 2025 01:25:20 +0000 (0:00:01.120) 0:00:01.539 ********* 2025-03-11 01:33:52.418462 | orchestrator | ok: [testbed-node-0] => 
(item=enable_loadbalancer_True) 2025-03-11 01:33:52.418475 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-03-11 01:33:52.418488 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-03-11 01:33:52.418501 | orchestrator | 2025-03-11 01:33:52.418513 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-03-11 01:33:52.418526 | orchestrator | 2025-03-11 01:33:52.418539 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-11 01:33:52.418551 | orchestrator | Tuesday 11 March 2025 01:25:21 +0000 (0:00:00.946) 0:00:02.486 ********* 2025-03-11 01:33:52.418565 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.418579 | orchestrator | 2025-03-11 01:33:52.418591 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-03-11 01:33:52.418604 | orchestrator | Tuesday 11 March 2025 01:25:23 +0000 (0:00:01.352) 0:00:03.839 ********* 2025-03-11 01:33:52.418639 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.418653 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.418665 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.418678 | orchestrator | 2025-03-11 01:33:52.418691 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-03-11 01:33:52.418703 | orchestrator | Tuesday 11 March 2025 01:25:24 +0000 (0:00:01.420) 0:00:05.259 ********* 2025-03-11 01:33:52.418716 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.418728 | orchestrator | 2025-03-11 01:33:52.418741 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-03-11 01:33:52.418753 | orchestrator | Tuesday 11 March 2025 01:25:26 +0000 (0:00:02.032) 0:00:07.292 
********* 2025-03-11 01:33:52.418766 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.418778 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.418791 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.418803 | orchestrator | 2025-03-11 01:33:52.418816 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-03-11 01:33:52.418828 | orchestrator | Tuesday 11 March 2025 01:25:28 +0000 (0:00:01.701) 0:00:08.993 ********* 2025-03-11 01:33:52.418841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418942 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:33:52.418972 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:33:52.418987 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:33:52.419000 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:33:52.419013 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:33:52.419025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:33:52.419037 | orchestrator | changed: [testbed-node-2] 
=> (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:33:52.419050 | orchestrator | 2025-03-11 01:33:52.419062 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-11 01:33:52.419075 | orchestrator | Tuesday 11 March 2025 01:25:32 +0000 (0:00:03.978) 0:00:12.972 ********* 2025-03-11 01:33:52.419094 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-11 01:33:52.419107 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-11 01:33:52.419120 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-11 01:33:52.419132 | orchestrator | 2025-03-11 01:33:52.419145 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-11 01:33:52.419158 | orchestrator | Tuesday 11 March 2025 01:25:34 +0000 (0:00:02.444) 0:00:15.417 ********* 2025-03-11 01:33:52.419170 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-11 01:33:52.419183 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-11 01:33:52.419195 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-11 01:33:52.419208 | orchestrator | 2025-03-11 01:33:52.419220 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-11 01:33:52.419233 | orchestrator | Tuesday 11 March 2025 01:25:36 +0000 (0:00:02.075) 0:00:17.492 ********* 2025-03-11 01:33:52.419245 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-03-11 01:33:52.419258 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.419298 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-03-11 01:33:52.419312 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.419325 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-03-11 01:33:52.419338 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.419350 | orchestrator | 2025-03-11 01:33:52.419363 | orchestrator | 
TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-03-11 01:33:52.419376 | orchestrator | Tuesday 11 March 2025 01:25:37 +0000 (0:00:00.783) 0:00:18.275 ********* 2025-03-11 01:33:52.419391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.419459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.419483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.419518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.419532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.419554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.419569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.419584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.419606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.419648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.419662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.419675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.419688 | orchestrator | 2025-03-11 01:33:52.419701 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-03-11 01:33:52.419714 | orchestrator | Tuesday 11 March 2025 01:25:40 +0000 (0:00:03.297) 0:00:21.573 ********* 2025-03-11 01:33:52.419726 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:33:52.419739 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:33:52.419752 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:33:52.419764 | orchestrator | 2025-03-11 01:33:52.419777 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-03-11 01:33:52.419795 | orchestrator | Tuesday 11 March 2025 01:25:47 +0000 (0:00:06.153) 0:00:27.731 ********* 2025-03-11 01:33:52.419808 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-03-11 01:33:52.419821 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-03-11 01:33:52.419833 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-03-11 01:33:52.419846 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-03-11 01:33:52.419858 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.419871 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-03-11 01:33:52.419883 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.419902 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-03-11 01:33:52.419922 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.419934 | orchestrator | 2025-03-11 01:33:52.419947 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-03-11 01:33:52.419959 | orchestrator | Tuesday 11 March 2025 01:25:51 +0000 (0:00:04.689) 0:00:32.421 ********* 2025-03-11 01:33:52.419972 | orchestrator | changed: 
[testbed-node-0] 2025-03-11 01:33:52.419984 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:33:52.419997 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:33:52.420009 | orchestrator | 2025-03-11 01:33:52.420021 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-03-11 01:33:52.420034 | orchestrator | Tuesday 11 March 2025 01:25:56 +0000 (0:00:04.488) 0:00:36.909 ********* 2025-03-11 01:33:52.420046 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.420058 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.420071 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.420083 | orchestrator | 2025-03-11 01:33:52.420095 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-03-11 01:33:52.420108 | orchestrator | Tuesday 11 March 2025 01:25:58 +0000 (0:00:01.879) 0:00:38.789 ********* 2025-03-11 01:33:52.420121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.420134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.420147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.420161 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420184 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420203 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420312 | orchestrator | 2025-03-11 01:33:52.420325 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-11 01:33:52.420338 | orchestrator | Tuesday 11 March 2025 01:26:04 +0000 (0:00:05.931) 0:00:44.720 ********* 2025-03-11 01:33:52.420350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.420514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420527 | orchestrator | 2025-03-11 01:33:52.420545 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-11 01:33:52.420558 | orchestrator | Tuesday 11 March 2025 01:26:09 +0000 (0:00:05.483) 0:00:50.204 ********* 2025-03-11 01:33:52.420572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.420666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.420686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
''], 'dimensions': {}}}) 2025-03-11 01:33:52.420700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.420726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 
'timeout': '30'}}})  2025-03-11 01:33:52.420750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.420763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.420782 | orchestrator | 2025-03-11 01:33:52.420795 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-11 01:33:52.420807 | orchestrator | Tuesday 11 March 2025 01:26:12 +0000 (0:00:03.087) 0:00:53.292 ********* 2025-03-11 01:33:52.420820 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-11 01:33:52.420833 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-11 01:33:52.420845 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 
2025-03-11 01:33:52.420858 | orchestrator | 2025-03-11 01:33:52.420870 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-03-11 01:33:52.420888 | orchestrator | Tuesday 11 March 2025 01:26:16 +0000 (0:00:03.673) 0:00:56.965 ********* 2025-03-11 01:33:52.420901 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:33:52.420913 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.420926 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:33:52.420938 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.420951 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:33:52.420963 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.420976 | orchestrator | 2025-03-11 01:33:52.420988 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-03-11 01:33:52.421000 | orchestrator | Tuesday 11 March 2025 01:26:20 +0000 (0:00:03.762) 0:01:00.728 ********* 2025-03-11 01:33:52.421012 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.421025 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.421037 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.421049 | orchestrator | 2025-03-11 01:33:52.421062 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-03-11 01:33:52.421074 | orchestrator | Tuesday 11 March 2025 01:26:24 +0000 (0:00:04.088) 0:01:04.817 ********* 2025-03-11 01:33:52.421087 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:33:52.421101 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:33:52.421113 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:33:52.421125 | orchestrator | 2025-03-11 01:33:52.421138 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-03-11 01:33:52.421150 | orchestrator | Tuesday 11 March 2025 01:26:32 +0000 (0:00:07.972) 0:01:12.789 ********* 2025-03-11 01:33:52.421162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:33:52.421182 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:33:52.421195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:33:52.421208 | orchestrator | 2025-03-11 01:33:52.421231 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-03-11 01:33:52.421244 | orchestrator | Tuesday 11 March 2025 01:26:39 +0000 (0:00:07.174) 0:01:19.964 ********* 2025-03-11 01:33:52.421256 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-03-11 01:33:52.421269 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-03-11 01:33:52.421281 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-03-11 01:33:52.421293 | orchestrator | 2025-03-11 01:33:52.421306 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-03-11 01:33:52.421318 | orchestrator | Tuesday 11 March 2025 01:26:43 +0000 (0:00:04.541) 0:01:24.506 ********* 2025-03-11 01:33:52.421331 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-03-11 01:33:52.421343 | orchestrator | changed: 
[testbed-node-2] => (item=haproxy-internal.pem) 2025-03-11 01:33:52.421356 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-03-11 01:33:52.421368 | orchestrator | 2025-03-11 01:33:52.421380 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-11 01:33:52.421392 | orchestrator | Tuesday 11 March 2025 01:26:46 +0000 (0:00:02.396) 0:01:26.903 ********* 2025-03-11 01:33:52.421404 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.421417 | orchestrator | 2025-03-11 01:33:52.421429 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-03-11 01:33:52.421441 | orchestrator | Tuesday 11 March 2025 01:26:48 +0000 (0:00:02.437) 0:01:29.341 ********* 2025-03-11 01:33:52.421454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.421479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.421494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.421507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.421531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 
2025-03-11 01:33:52.421544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.421557 | orchestrator | 2025-03-11 01:33:52.421569 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-11 01:33:52.421582 | orchestrator | Tuesday 11 March 2025 01:26:52 +0000 (0:00:03.845) 0:01:33.186 ********* 2025-03-11 01:33:52.421594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 
'dimensions': {}}})  2025-03-11 01:33:52.421640 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.421660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.421687 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.421699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.421737 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.421749 | orchestrator | 2025-03-11 01:33:52.421762 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-11 01:33:52.421774 | orchestrator | Tuesday 11 March 2025 01:26:54 +0000 (0:00:01.760) 0:01:34.946 ********* 2025-03-11 01:33:52.421787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.421812 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.421825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.421858 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.421871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:33:52.421890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.421903 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.421915 | orchestrator | 2025-03-11 01:33:52.421928 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-03-11 01:33:52.421940 | orchestrator | Tuesday 11 March 2025 01:26:59 +0000 (0:00:05.519) 0:01:40.466 ********* 2025-03-11 01:33:52.421953 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:33:52.421966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:33:52.421979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:33:52.421992 | orchestrator | 2025-03-11 01:33:52.422005 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-03-11 01:33:52.422056 | orchestrator | Tuesday 11 March 2025 01:27:03 +0000 (0:00:03.356) 0:01:43.822 ********* 2025-03-11 01:33:52.422071 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:33:52.422084 | orchestrator | skipping: [testbed-node-0] 2025-03-11 
01:33:52.422097 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:33:52.422109 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.422122 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:33:52.422134 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.422147 | orchestrator | 2025-03-11 01:33:52.422159 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-03-11 01:33:52.422172 | orchestrator | Tuesday 11 March 2025 01:27:05 +0000 (0:00:01.910) 0:01:45.733 ********* 2025-03-11 01:33:52.422184 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:33:52.422197 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:33:52.422209 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:33:52.422221 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:33:52.422234 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.422247 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:33:52.422259 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.422272 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:33:52.422284 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.422297 | orchestrator | 2025-03-11 01:33:52.422309 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-11 01:33:52.422322 | orchestrator | Tuesday 11 March 
2025 01:27:09 +0000 (0:00:04.054) 0:01:49.788 ********* 2025-03-11 01:33:52.422353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.422368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.422382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.422395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.422408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:33:52.422421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:33:52.422445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.422467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.422481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.422494 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.422507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:33:52.422520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2', '__omit_place_holder__fe0f400ace1f271d8c3b895ceb46cdc04e6308a2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:33:52.422533 | orchestrator | 2025-03-11 
01:33:52.422546 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-11 01:33:52.422559 | orchestrator | Tuesday 11 March 2025 01:27:12 +0000 (0:00:03.263) 0:01:53.052 ********* 2025-03-11 01:33:52.422572 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.422595 | orchestrator | 2025-03-11 01:33:52.422608 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-11 01:33:52.422644 | orchestrator | Tuesday 11 March 2025 01:27:13 +0000 (0:00:01.060) 0:01:54.112 ********* 2025-03-11 01:33:52.422665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:33:52.422680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.422695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 
'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:33:52.422768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.422789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:33:52.422837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.422850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.422892 | orchestrator | 2025-03-11 01:33:52.422905 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-11 01:33:52.422918 | orchestrator | Tuesday 11 March 2025 01:27:20 +0000 (0:00:07.218) 0:02:01.330 ********* 2025-03-11 01:33:52.422937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:33:52.422950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.422964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:33:52.422977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.422990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423043 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.423065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423078 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.423091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:33:52.423104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:33:52.423117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423157 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.423170 | orchestrator | 2025-03-11 01:33:52.423183 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-03-11 
01:33:52.423196 | orchestrator | Tuesday 11 March 2025 01:27:22 +0000 (0:00:01.784) 0:02:03.115 ********* 2025-03-11 01:33:52.423209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423234 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.423247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423273 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.423293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:33:52.423319 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.423332 | orchestrator | 2025-03-11 01:33:52.423344 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-03-11 01:33:52.423356 | orchestrator | Tuesday 11 March 2025 01:27:24 +0000 (0:00:01.592) 0:02:04.708 ********* 2025-03-11 
01:33:52.423369 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.423382 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.423394 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.423407 | orchestrator | 2025-03-11 01:33:52.423419 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-03-11 01:33:52.423432 | orchestrator | Tuesday 11 March 2025 01:27:24 +0000 (0:00:00.552) 0:02:05.260 ********* 2025-03-11 01:33:52.423444 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.423456 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.423469 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.423481 | orchestrator | 2025-03-11 01:33:52.423498 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-03-11 01:33:52.423511 | orchestrator | Tuesday 11 March 2025 01:27:27 +0000 (0:00:02.664) 0:02:07.924 ********* 2025-03-11 01:33:52.423523 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.423535 | orchestrator | 2025-03-11 01:33:52.423548 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-03-11 01:33:52.423560 | orchestrator | Tuesday 11 March 2025 01:27:28 +0000 (0:00:01.206) 0:02:09.131 ********* 2025-03-11 01:33:52.423579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.423594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.423671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.423691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-03-11 01:33:52.423745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423759 | orchestrator | 2025-03-11 01:33:52.423772 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-03-11 01:33:52.423785 | orchestrator | Tuesday 11 March 2025 01:27:34 +0000 (0:00:06.155) 0:02:15.287 ********* 2025-03-11 01:33:52.423797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-11 
01:33:52.423817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423843 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.423856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.423888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423921 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.423934 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.423948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.423961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.423974 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.423986 | orchestrator |
2025-03-11 01:33:52.423999 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-03-11 01:33:52.424011 | orchestrator | Tuesday 11 March 2025 01:27:37 +0000 (0:00:02.877) 0:02:18.165 *********
2025-03-11 01:33:52.424024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424049 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.424070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424105 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.424117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:33:52.424149 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.424162 | orchestrator |
2025-03-11 01:33:52.424174 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-03-11 01:33:52.424187 | orchestrator | Tuesday 11 March 2025 01:27:38 +0000 (0:00:01.151) 0:02:19.317 *********
2025-03-11 01:33:52.424199 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.424212 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.424224 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.424236 | orchestrator |
2025-03-11 01:33:52.424249 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-03-11 01:33:52.424261 | orchestrator | Tuesday 11 March 2025 01:27:39 +0000 (0:00:00.591) 0:02:19.908 *********
2025-03-11 01:33:52.424274 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.424286 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.424299 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.424311 | orchestrator |
2025-03-11 01:33:52.424324 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-03-11 01:33:52.424336 | orchestrator | Tuesday 11 March 2025 01:27:40 +0000 (0:00:01.613) 0:02:21.521 *********
2025-03-11 01:33:52.424349 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.424361 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.424379 | orchestrator |
skipping: [testbed-node-2] 2025-03-11 01:33:52.424391 | orchestrator | 2025-03-11 01:33:52.424404 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-03-11 01:33:52.424416 | orchestrator | Tuesday 11 March 2025 01:27:41 +0000 (0:00:00.479) 0:02:22.001 ********* 2025-03-11 01:33:52.424429 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.424441 | orchestrator | 2025-03-11 01:33:52.424453 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-03-11 01:33:52.424466 | orchestrator | Tuesday 11 March 2025 01:27:42 +0000 (0:00:01.089) 0:02:23.090 ********* 2025-03-11 01:33:52.424487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-11 01:33:52.424501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-11 01:33:52.424529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-11 01:33:52.424543 | orchestrator | 2025-03-11 01:33:52.424555 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-03-11 01:33:52.424568 | orchestrator | Tuesday 11 March 2025 01:27:46 +0000 (0:00:03.577) 0:02:26.668 ********* 2025-03-11 01:33:52.424580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-11 01:33:52.424593 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.424662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-11 01:33:52.424681 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.424694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-11 01:33:52.424707 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.424719 | orchestrator | 2025-03-11 01:33:52.424732 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-03-11 01:33:52.424752 | orchestrator | Tuesday 11 March 2025 01:27:48 +0000 (0:00:02.217) 0:02:28.886 ********* 2025-03-11 01:33:52.424766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424802 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.424815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424842 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.424855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-11 01:33:52.424880 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.424893 | orchestrator | 2025-03-11 01:33:52.424906 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-03-11 01:33:52.424918 | orchestrator | Tuesday 11 March 2025 01:27:51 +0000 (0:00:02.930) 0:02:31.816 ********* 2025-03-11 01:33:52.424931 | 
orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.424943 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.424956 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.424969 | orchestrator |
2025-03-11 01:33:52.424981 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-03-11 01:33:52.424993 | orchestrator | Tuesday 11 March 2025 01:27:51 +0000 (0:00:00.583) 0:02:32.399 *********
2025-03-11 01:33:52.425006 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.425019 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.425031 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.425044 | orchestrator |
2025-03-11 01:33:52.425057 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-03-11 01:33:52.425077 | orchestrator | Tuesday 11 March 2025 01:27:53 +0000 (0:00:01.639) 0:02:34.039 *********
2025-03-11 01:33:52.425090 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.425102 | orchestrator |
2025-03-11 01:33:52.425115 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-03-11 01:33:52.425128 | orchestrator | Tuesday 11 March 2025 01:27:54 +0000 (0:00:00.968) 0:02:35.008 *********
2025-03-11 01:33:52.425141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api':
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.425165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.425236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.425278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-03-11 01:33:52.425292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425356 | orchestrator | 2025-03-11 01:33:52.425369 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-03-11 01:33:52.425405 | orchestrator | Tuesday 11 March 2025 01:28:03 +0000 (0:00:09.124) 0:02:44.132 ********* 2025-03-11 01:33:52.425419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.425431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.425478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.425491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425524 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.425544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425577 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.425590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.425603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.425679 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.425691 | orchestrator |
2025-03-11 01:33:52.425704 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-03-11 01:33:52.425716 | orchestrator | Tuesday 11 March 2025 01:28:06 +0000 (0:00:02.693) 0:02:46.826 *********
2025-03-11 01:33:52.425729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425762 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.425774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425800 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.425813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-03-11 01:33:52.425838 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.425850 | orchestrator |
2025-03-11 01:33:52.425863 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-03-11 01:33:52.425875 | orchestrator | Tuesday 11 March 2025 01:28:09 +0000 (0:00:03.471) 0:02:50.297 *********
2025-03-11 01:33:52.425888 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.425900 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.425913 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.425925 | orchestrator |
2025-03-11 01:33:52.425938 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-03-11 01:33:52.425950 | orchestrator | Tuesday 11 March 2025 01:28:10 +0000 (0:00:00.741) 0:02:51.039 *********
2025-03-11 01:33:52.425962 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.425975 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.425988 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.426001 | orchestrator |
2025-03-11 01:33:52.426039 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-03-11 01:33:52.426056 | orchestrator | Tuesday 11 March 2025 01:28:12 +0000 (0:00:02.180) 0:02:53.219 *********
2025-03-11 01:33:52.426069 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.426083 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.426096 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.426108 | orchestrator |
2025-03-11 01:33:52.426121 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-03-11 01:33:52.426134 | orchestrator | Tuesday 11 March 2025 01:28:13 +0000 (0:00:00.456) 0:02:53.676 *********
2025-03-11 01:33:52.426146 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.426158 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.426171 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.426183 | orchestrator |
2025-03-11 01:33:52.426196 | orchestrator | TASK [include_role : designate] ************************************************
2025-03-11 01:33:52.426223 | orchestrator | Tuesday 11 March 2025 01:28:13 +0000 (0:00:00.634) 0:02:54.310 *********
2025-03-11 01:33:52.426237 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.426250 | orchestrator |
2025-03-11 01:33:52.426263 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-03-11 01:33:52.426275 | orchestrator | Tuesday 11 March 2025 01:28:15 +0000 (0:00:01.361) 0:02:55.671 *********
2025-03-11 01:33:52.426288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.426308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.426333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.426435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.426448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.426551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.426564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426711 | orchestrator |
2025-03-11 01:33:52.426724 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-03-11 01:33:52.426737 | orchestrator | Tuesday 11 March 2025 01:28:25 +0000 (0:00:10.693) 0:03:06.365 *********
2025-03-11 01:33:52.426749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.426763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.426776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426869 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.426881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.426894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.426930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.426992 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.427002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-03-11 01:33:52.427029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-03-11 01:33:52.427041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.427052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.427063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.427073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.427083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.427099 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.427110 | orchestrator |
2025-03-11 01:33:52.427120 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-03-11 01:33:52.427130 | orchestrator | Tuesday 11 March 2025 01:28:29 +0000 (0:00:03.505) 0:03:09.871 *********
2025-03-11 01:33:52.427140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427161 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427198 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:33:52.427230 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.427240 | orchestrator | 2025-03-11 01:33:52.427250 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-03-11 01:33:52.427261 | orchestrator | Tuesday 11 March 2025 01:28:33 +0000 (0:00:04.021) 0:03:13.892 ********* 2025-03-11 01:33:52.427271 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427281 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427291 | orchestrator | skipping: [testbed-node-2] 2025-03-11 
01:33:52.427301 | orchestrator | 2025-03-11 01:33:52.427312 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-03-11 01:33:52.427322 | orchestrator | Tuesday 11 March 2025 01:28:34 +0000 (0:00:00.912) 0:03:14.805 ********* 2025-03-11 01:33:52.427332 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427342 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427353 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.427363 | orchestrator | 2025-03-11 01:33:52.427373 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-03-11 01:33:52.427383 | orchestrator | Tuesday 11 March 2025 01:28:36 +0000 (0:00:02.060) 0:03:16.866 ********* 2025-03-11 01:33:52.427393 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427403 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427414 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.427424 | orchestrator | 2025-03-11 01:33:52.427434 | orchestrator | TASK [include_role : glance] *************************************************** 2025-03-11 01:33:52.427445 | orchestrator | Tuesday 11 March 2025 01:28:37 +0000 (0:00:00.816) 0:03:17.683 ********* 2025-03-11 01:33:52.427455 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.427465 | orchestrator | 2025-03-11 01:33:52.427475 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-03-11 01:33:52.427485 | orchestrator | Tuesday 11 March 2025 01:28:38 +0000 (0:00:01.205) 0:03:18.888 ********* 2025-03-11 01:33:52.427509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:33:52.427534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:33:52.427577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:33:52.427632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427652 | orchestrator | 2025-03-11 01:33:52.427663 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-11 01:33:52.427673 | orchestrator | Tuesday 11 March 2025 01:28:49 +0000 (0:00:11.065) 0:03:29.954 ********* 2025-03-11 01:33:52.427683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:33:52.427709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427728 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:33:52.427756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427773 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.427791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:33:52.427808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.427826 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427837 | orchestrator | 2025-03-11 01:33:52.427847 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-03-11 01:33:52.427858 | orchestrator | Tuesday 11 March 2025 01:28:56 +0000 (0:00:07.649) 0:03:37.604 ********* 2025-03-11 01:33:52.427894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427918 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.427928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427956 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.427967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:33:52.427989 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428006 | orchestrator | 2025-03-11 01:33:52.428017 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-11 01:33:52.428027 | orchestrator | Tuesday 11 March 2025 01:29:05 +0000 (0:00:08.171) 0:03:45.775 ********* 2025-03-11 01:33:52.428037 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428048 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428059 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428069 | orchestrator | 2025-03-11 01:33:52.428079 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-03-11 01:33:52.428090 | orchestrator | Tuesday 11 March 2025 01:29:05 +0000 (0:00:00.376) 0:03:46.152 ********* 2025-03-11 01:33:52.428100 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428110 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428120 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428130 | orchestrator | 2025-03-11 01:33:52.428141 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-11 01:33:52.428151 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:01.676) 0:03:47.828 ********* 2025-03-11 01:33:52.428161 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428171 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428181 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428192 | orchestrator | 2025-03-11 01:33:52.428202 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-03-11 01:33:52.428212 | orchestrator | Tuesday 11 March 2025 01:29:07 +0000 (0:00:00.641) 0:03:48.470 
********* 2025-03-11 01:33:52.428223 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.428233 | orchestrator | 2025-03-11 01:33:52.428243 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-11 01:33:52.428254 | orchestrator | Tuesday 11 March 2025 01:29:09 +0000 (0:00:01.501) 0:03:49.971 ********* 2025-03-11 01:33:52.428269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:33:52.428287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:33:52.428298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:33:52.428309 | orchestrator | 2025-03-11 01:33:52.428319 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-11 01:33:52.428329 | orchestrator | Tuesday 11 March 2025 01:29:13 +0000 (0:00:04.435) 0:03:54.407 ********* 2025-03-11 01:33:52.428340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:33:52.428350 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:33:52.428372 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:33:52.428639 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428659 | orchestrator | 2025-03-11 01:33:52.428670 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-11 01:33:52.428680 | orchestrator | Tuesday 11 March 2025 01:29:14 +0000 (0:00:00.537) 0:03:54.944 ********* 2025-03-11 01:33:52.428691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428718 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428749 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:33:52.428781 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428791 | orchestrator | 2025-03-11 01:33:52.428801 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-11 01:33:52.428811 | orchestrator | Tuesday 11 March 2025 01:29:15 +0000 (0:00:01.165) 0:03:56.110 ********* 2025-03-11 01:33:52.428821 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428831 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428841 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428852 | orchestrator | 2025-03-11 01:33:52.428862 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-11 01:33:52.428872 | orchestrator | Tuesday 11 March 2025 01:29:15 +0000 (0:00:00.338) 0:03:56.448 ********* 2025-03-11 
01:33:52.428882 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.428892 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.428902 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.428912 | orchestrator | 2025-03-11 01:33:52.428922 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-11 01:33:52.428932 | orchestrator | Tuesday 11 March 2025 01:29:17 +0000 (0:00:01.648) 0:03:58.097 ********* 2025-03-11 01:33:52.428942 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.428953 | orchestrator | 2025-03-11 01:33:52.428963 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-03-11 01:33:52.428973 | orchestrator | Tuesday 11 March 2025 01:29:18 +0000 (0:00:01.345) 0:03:59.442 ********* 2025-03-11 01:33:52.428983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.429127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429138 | orchestrator | 2025-03-11 01:33:52.429148 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-03-11 01:33:52.429158 | orchestrator | Tuesday 11 March 2025 01:29:26 +0000 (0:00:07.771) 0:04:07.214 ********* 2025-03-11 01:33:52.429169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429212 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.429231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429267 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.429278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.429320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.429332 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.429343 | orchestrator | 2025-03-11 01:33:52.429354 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-03-11 01:33:52.429366 | orchestrator | Tuesday 11 March 2025 01:29:27 +0000 (0:00:01.381) 0:04:08.596 ********* 2025-03-11 01:33:52.429377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429435 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.429446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429493 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.429505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:33:52.429551 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.429561 | orchestrator | 2025-03-11 01:33:52.429571 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] 
*************** 2025-03-11 01:33:52.429581 | orchestrator | Tuesday 11 March 2025 01:29:29 +0000 (0:00:01.696) 0:04:10.292 ********* 2025-03-11 01:33:52.429591 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.429601 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.429654 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.429667 | orchestrator | 2025-03-11 01:33:52.429682 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-03-11 01:33:52.429697 | orchestrator | Tuesday 11 March 2025 01:29:30 +0000 (0:00:00.545) 0:04:10.837 ********* 2025-03-11 01:33:52.429708 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.429718 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.429728 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.429739 | orchestrator | 2025-03-11 01:33:52.429749 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-11 01:33:52.429759 | orchestrator | Tuesday 11 March 2025 01:29:32 +0000 (0:00:02.002) 0:04:12.840 ********* 2025-03-11 01:33:52.429769 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.429780 | orchestrator | 2025-03-11 01:33:52.429790 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-11 01:33:52.429800 | orchestrator | Tuesday 11 March 2025 01:29:33 +0000 (0:00:01.217) 0:04:14.057 ********* 2025-03-11 01:33:52.429811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-11 01:33:52.429843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-03-11 01:33:52.429863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-03-11 01:33:52.429880 | orchestrator |
2025-03-11 01:33:52.429890 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-03-11 01:33:52.429901 | orchestrator | Tuesday 11 March 2025 01:29:39 +0000 (0:00:05.691) 0:04:19.749 *********
2025-03-11 01:33:52.429924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80',
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-11 01:33:52.429945 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.429956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-11 01:33:52.429974 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.429991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-03-11 01:33:52.430060 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430074 | orchestrator |
2025-03-11 01:33:52.430085 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-03-11 01:33:52.430096 | orchestrator | Tuesday 11 March 2025 01:29:40 +0000 (0:00:00.986) 0:04:20.735 *********
2025-03-11 01:33:52.430108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-03-11 01:33:52.430165 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.430178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-03-11 01:33:52.430244 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.430253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430271 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-03-11 01:33:52.430279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-03-11 01:33:52.430288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-03-11 01:33:52.430297 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430306 | orchestrator |
2025-03-11 01:33:52.430315 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-03-11 01:33:52.430402 | orchestrator | Tuesday 11 March 2025 01:29:41 +0000 (0:00:01.600) 0:04:22.336 *********
2025-03-11 01:33:52.430411 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.430420 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.430428 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430437 | orchestrator |
2025-03-11 01:33:52.430446 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-03-11 01:33:52.430454 | orchestrator | Tuesday 11 March 2025 01:29:42 +0000 (0:00:00.632) 0:04:22.968 *********
2025-03-11 01:33:52.430463 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.430471 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.430480 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430488 | orchestrator |
2025-03-11 01:33:52.430497 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-03-11 01:33:52.430505 | orchestrator | Tuesday 11 March 2025 01:29:43 +0000 (0:00:01.606) 0:04:24.575 *********
2025-03-11 01:33:52.430514 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.430522 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.430531 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430539 | orchestrator |
2025-03-11 01:33:52.430548 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-03-11 01:33:52.430556 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:00.554) 0:04:25.129 *********
2025-03-11 01:33:52.430565 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.430573 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.430586 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.430594 | orchestrator |
2025-03-11 01:33:52.430603 | orchestrator | TASK [include_role : keystone] *************************************************
2025-03-11 01:33:52.430626 | orchestrator | Tuesday 11 March 2025 01:29:44 +0000 (0:00:00.343) 0:04:25.473 *********
2025-03-11 01:33:52.430636 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.430644 | orchestrator |
2025-03-11 01:33:52.430653 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-03-11 01:33:52.430667 | orchestrator | Tuesday 11 March 2025 01:29:46 +0000 (0:00:01.601) 0:04:27.075 *********
2025-03-11 01:33:52.430681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-11 01:33:52.430692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:33:52.430702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:33:52.430712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-11 01:33:52.430721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:33:52.430744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:33:52.430754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-11 01:33:52.430763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-03-11 01:33:52.430773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-03-11 01:33:52.430782 | orchestrator |
2025-03-11 01:33:52.430790 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-03-11 01:33:52.430799 | orchestrator | Tuesday 11 March 2025 01:29:51 +0000 (0:00:04.766) 0:04:31.842 *********
2025-03-11 01:33:52.430808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-11 01:33:52.430828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:33:52.430837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:33:52.430846 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.430855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-11 01:33:52.430865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:33:52.430874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:33:52.430888 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.430901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-11 01:33:52.430911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:33:52.430920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:33:52.430929 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.430937 | orchestrator | 2025-03-11 01:33:52.430946 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-03-11 01:33:52.430954 | orchestrator | Tuesday 11 March 2025 01:29:52 +0000 (0:00:01.516) 0:04:33.358 ********* 2025-03-11 01:33:52.430963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.430975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.430984 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.430992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.431002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.431017 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.431035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-11 01:33:52.431044 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431052 | orchestrator | 2025-03-11 01:33:52.431061 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-11 01:33:52.431070 | orchestrator | Tuesday 11 March 2025 01:29:53 +0000 (0:00:01.076) 0:04:34.435 ********* 2025-03-11 01:33:52.431078 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431087 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431095 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431104 | orchestrator | 2025-03-11 01:33:52.431112 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-11 01:33:52.431121 | orchestrator | Tuesday 11 March 2025 01:29:54 +0000 (0:00:00.557) 0:04:34.993 ********* 2025-03-11 01:33:52.431129 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431138 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431146 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431155 | orchestrator | 2025-03-11 
01:33:52.431167 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-11 01:33:52.431180 | orchestrator | Tuesday 11 March 2025 01:29:55 +0000 (0:00:01.579) 0:04:36.572 ********* 2025-03-11 01:33:52.431189 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431197 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431206 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431215 | orchestrator | 2025-03-11 01:33:52.431223 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-11 01:33:52.431232 | orchestrator | Tuesday 11 March 2025 01:29:56 +0000 (0:00:00.390) 0:04:36.962 ********* 2025-03-11 01:33:52.431241 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.431249 | orchestrator | 2025-03-11 01:33:52.431258 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-11 01:33:52.431266 | orchestrator | Tuesday 11 March 2025 01:29:57 +0000 (0:00:01.518) 0:04:38.481 ********* 2025-03-11 01:33:52.431275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:33:52.431284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:33:52.431300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:33:52.431335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431344 | orchestrator | 2025-03-11 01:33:52.431353 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-11 01:33:52.431362 | orchestrator | Tuesday 11 March 2025 01:30:03 +0000 (0:00:05.844) 0:04:44.326 ********* 2025-03-11 01:33:52.431370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:33:52.431384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431394 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:33:52.431415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431424 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:33:52.431446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431456 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431464 | orchestrator | 2025-03-11 01:33:52.431473 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-11 01:33:52.431482 | orchestrator | Tuesday 11 March 2025 01:30:04 +0000 (0:00:01.218) 0:04:45.544 ********* 2025-03-11 01:33:52.431491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431509 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431535 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:33:52.431561 | orchestrator | skipping: 
[testbed-node-2] 2025-03-11 01:33:52.431569 | orchestrator | 2025-03-11 01:33:52.431582 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-11 01:33:52.431591 | orchestrator | Tuesday 11 March 2025 01:30:06 +0000 (0:00:01.968) 0:04:47.513 ********* 2025-03-11 01:33:52.431599 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431608 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431629 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431638 | orchestrator | 2025-03-11 01:33:52.431647 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-03-11 01:33:52.431655 | orchestrator | Tuesday 11 March 2025 01:30:07 +0000 (0:00:00.629) 0:04:48.143 ********* 2025-03-11 01:33:52.431664 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.431672 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.431681 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.431689 | orchestrator | 2025-03-11 01:33:52.431698 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-11 01:33:52.431711 | orchestrator | Tuesday 11 March 2025 01:30:09 +0000 (0:00:01.969) 0:04:50.112 ********* 2025-03-11 01:33:52.431720 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.431729 | orchestrator | 2025-03-11 01:33:52.431737 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-03-11 01:33:52.431746 | orchestrator | Tuesday 11 March 2025 01:30:11 +0000 (0:00:02.105) 0:04:52.218 ********* 2025-03-11 01:33:52.431774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-11 01:33:52.431784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431803 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-11 01:33:52.431834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-11 01:33:52.431870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.431908 | orchestrator | 2025-03-11 01:33:52.431917 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-03-11 01:33:52.431925 | orchestrator | Tuesday 11 March 2025 01:30:16 +0000 (0:00:05.224) 0:04:57.443 ********* 2025-03-11 01:33:52.431934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-11 01:33:52.431943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.431952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:33:52.431965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.431979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.431988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.431997 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.432015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.432024 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:33:52.432042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.432062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.432071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.432080 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432089 | orchestrator |
2025-03-11 01:33:52.432098 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-03-11 01:33:52.432106 | orchestrator | Tuesday 11 March 2025 01:30:17 +0000 (0:00:00.965) 0:04:58.409 *********
2025-03-11 01:33:52.432115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432132 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432159 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:33:52.432185 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432194 | orchestrator |
2025-03-11 01:33:52.432202 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-03-11 01:33:52.432211 | orchestrator | Tuesday 11 March 2025 01:30:19 +0000 (0:00:01.348) 0:04:59.758 *********
2025-03-11 01:33:52.432219 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432228 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432236 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432245 | orchestrator |
2025-03-11 01:33:52.432254 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-03-11 01:33:52.432267 | orchestrator | Tuesday 11 March 2025 01:30:19 +0000 (0:00:00.564) 0:05:00.323 *********
2025-03-11 01:33:52.432276 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432284 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432293 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432301 | orchestrator |
2025-03-11 01:33:52.432310 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-03-11 01:33:52.432319 | orchestrator | Tuesday 11 March 2025 01:30:21 +0000 (0:00:01.666) 0:05:01.990 *********
2025-03-11 01:33:52.432327 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.432336 | orchestrator |
2025-03-11 01:33:52.432345 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-03-11 01:33:52.432353 | orchestrator | Tuesday 11 March 2025 01:30:23 +0000 (0:00:01.735) 0:05:03.725 *********
2025-03-11 01:33:52.432362 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-03-11 01:33:52.432370 | orchestrator |
2025-03-11 01:33:52.432379 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-03-11 01:33:52.432391 | orchestrator | Tuesday 11 March 2025 01:30:26 +0000 (0:00:03.638) 0:05:07.364 *********
2025-03-11 01:33:52.432407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432480 | orchestrator |
2025-03-11 01:33:52.432489 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-03-11 01:33:52.432497 | orchestrator | Tuesday 11 March 2025 01:30:31 +0000 (0:00:04.597) 0:05:11.962 *********
2025-03-11 01:33:52.432513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432686 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432720 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:33:52.432802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:33:52.432816 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432825 | orchestrator |
2025-03-11 01:33:52.432834 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-03-11 01:33:52.432843 | orchestrator | Tuesday 11 March 2025 01:30:34 +0000 (0:00:03.377) 0:05:15.340 *********
2025-03-11 01:33:52.432852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432870 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.432927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432949 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.432958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:33:52.432981 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.432990 | orchestrator |
2025-03-11 01:33:52.432999 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-03-11 01:33:52.433008 | orchestrator | Tuesday 11 March 2025 01:30:38 +0000 (0:00:04.129) 0:05:19.469 *********
2025-03-11 01:33:52.433016 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433024 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433032 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433040 | orchestrator |
2025-03-11 01:33:52.433048 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-03-11 01:33:52.433056 | orchestrator | Tuesday 11 March 2025 01:30:39 +0000 (0:00:00.684) 0:05:20.154 *********
2025-03-11 01:33:52.433065 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433073 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433086 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433094 | orchestrator |
2025-03-11 01:33:52.433102 | orchestrator | TASK [include_role : masakari] *************************************************
2025-03-11 01:33:52.433110 | orchestrator | Tuesday 11 March 2025 01:30:41 +0000 (0:00:01.710) 0:05:21.865 *********
2025-03-11 01:33:52.433118 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433126 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433134 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433142 | orchestrator |
2025-03-11 01:33:52.433150 | orchestrator | TASK [include_role : memcached] ************************************************
2025-03-11 01:33:52.433159 | orchestrator | Tuesday 11 March 2025 01:30:41 +0000 (0:00:00.429) 0:05:22.294 *********
2025-03-11 01:33:52.433167 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.433175 | orchestrator |
2025-03-11 01:33:52.433183 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-03-11 01:33:52.433191 | orchestrator | Tuesday 11 March 2025 01:30:43 +0000 (0:00:01.836) 0:05:24.131 *********
2025-03-11 01:33:52.433210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433290 | orchestrator |
2025-03-11 01:33:52.433298 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-03-11 01:33:52.433307 | orchestrator | Tuesday 11 March 2025 01:30:45 +0000 (0:00:01.922) 0:05:26.053 *********
2025-03-11 01:33:52.433315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433323 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433340 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:33:52.433364 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433372 | orchestrator |
2025-03-11 01:33:52.433425 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-03-11 01:33:52.433436 | orchestrator | Tuesday 11 March 2025 01:30:46 +0000 (0:00:00.705) 0:05:26.758 *********
2025-03-11 01:33:52.433444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-11 01:33:52.433453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-11 01:33:52.433470 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433478 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-03-11 01:33:52.433494 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433502 | orchestrator |
2025-03-11 01:33:52.433510 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-03-11 01:33:52.433518 | orchestrator | Tuesday 11 March 2025 01:30:47 +0000 (0:00:00.915) 0:05:27.673 *********
2025-03-11 01:33:52.433526 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433533 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433541 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433549 | orchestrator |
2025-03-11 01:33:52.433557 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-03-11 01:33:52.433565 | orchestrator | Tuesday 11 March 2025 01:30:47 +0000 (0:00:00.646) 0:05:28.320 *********
2025-03-11 01:33:52.433573 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433581 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433589 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433597 | orchestrator |
2025-03-11 01:33:52.433605 | orchestrator | TASK [include_role : mistral] **************************************************
2025-03-11 01:33:52.433624 | orchestrator | Tuesday 11 March 2025 01:30:49 +0000 (0:00:01.737) 0:05:30.057 *********
2025-03-11 01:33:52.433633 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.433641 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.433649 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.433657 | orchestrator |
2025-03-11 01:33:52.433665 | orchestrator | TASK [include_role : neutron] **************************************************
2025-03-11 01:33:52.433673 | orchestrator | Tuesday 11 March 2025 01:30:49 +0000 (0:00:00.380)
0:05:30.438 ********* 2025-03-11 01:33:52.433681 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.433689 | orchestrator | 2025-03-11 01:33:52.433697 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-11 01:33:52.433704 | orchestrator | Tuesday 11 March 2025 01:30:51 +0000 (0:00:01.785) 0:05:32.224 ********* 2025-03-11 01:33:52.433713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:33:52.433722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.433812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.433839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.433897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.433918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.433927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.433935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.433944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.434042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.434051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:33:52.434068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': 
False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.434204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.434316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.434333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-03-11 01:33:52.434341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.434415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.434427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:33:52.434445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.434540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 
01:33:52.434549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.434685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.434702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.434789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.434801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434809 | orchestrator | 2025-03-11 01:33:52.434818 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-03-11 01:33:52.434827 | orchestrator | Tuesday 11 March 2025 01:30:57 +0000 (0:00:05.688) 0:05:37.913 ********* 2025-03-11 01:33:52.434836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:33:52.434844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.434945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.434984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.434992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435059 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.435066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.435103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:33:52.435163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
2025-03-11 01:33:52.435183 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.435191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.435266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.435373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.435409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435417 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435424 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.435472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:33:52.435482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:33:52.435563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:33:52.435594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.435684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 
01:33:52.435692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:33:52.435719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:33:52.435726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.435733 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.435740 | orchestrator | 2025-03-11 01:33:52.435748 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-03-11 01:33:52.435755 | orchestrator | Tuesday 11 March 2025 01:30:59 +0000 (0:00:02.370) 0:05:40.283 ********* 2025-03-11 01:33:52.435762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:33:52.435787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 
01:33:52.435795 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.435806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:33:52.435813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:33:52.435824 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.435832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:33:52.435840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:33:52.435847 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.435854 | orchestrator | 2025-03-11 01:33:52.435861 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-03-11 01:33:52.435868 | orchestrator | Tuesday 11 March 2025 01:31:01 +0000 (0:00:02.326) 0:05:42.610 ********* 2025-03-11 01:33:52.435875 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.435882 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.435889 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.435896 | orchestrator | 2025-03-11 01:33:52.435903 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-03-11 01:33:52.435910 | orchestrator | Tuesday 11 March 2025 01:31:02 +0000 (0:00:00.618) 0:05:43.228 ********* 2025-03-11 01:33:52.435917 | orchestrator | skipping: [testbed-node-0] 
2025-03-11 01:33:52.435924 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.435931 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.435938 | orchestrator | 2025-03-11 01:33:52.435945 | orchestrator | TASK [include_role : placement] ************************************************ 2025-03-11 01:33:52.435952 | orchestrator | Tuesday 11 March 2025 01:31:04 +0000 (0:00:01.695) 0:05:44.924 ********* 2025-03-11 01:33:52.435959 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.435966 | orchestrator | 2025-03-11 01:33:52.435973 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-03-11 01:33:52.435980 | orchestrator | Tuesday 11 March 2025 01:31:06 +0000 (0:00:01.875) 0:05:46.799 ********* 2025-03-11 01:33:52.435987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.435995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.436025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.436038 | orchestrator | 2025-03-11 01:33:52.436046 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-11 01:33:52.436053 | orchestrator | Tuesday 11 March 2025 01:31:11 +0000 (0:00:04.946) 
0:05:51.745 ********* 2025-03-11 01:33:52.436060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.436067 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2025-03-11 01:33:52.436081 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.436096 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436106 | orchestrator | 2025-03-11 01:33:52.436113 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-03-11 01:33:52.436120 | orchestrator | Tuesday 11 March 2025 01:31:11 +0000 (0:00:00.634) 0:05:52.380 ********* 2025-03-11 01:33:52.436127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436158 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436165 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436180 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436201 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436208 | orchestrator | 2025-03-11 01:33:52.436215 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-03-11 01:33:52.436222 | orchestrator | Tuesday 11 March 2025 01:31:13 +0000 (0:00:01.318) 0:05:53.699 ********* 2025-03-11 01:33:52.436229 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436236 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436242 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436249 | orchestrator | 2025-03-11 01:33:52.436256 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-03-11 01:33:52.436263 | orchestrator | Tuesday 11 March 2025 01:31:13 +0000 (0:00:00.332) 0:05:54.031 ********* 2025-03-11 01:33:52.436270 | orchestrator | skipping: [testbed-node-0] 2025-03-11 
01:33:52.436277 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436284 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436291 | orchestrator | 2025-03-11 01:33:52.436298 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-03-11 01:33:52.436305 | orchestrator | Tuesday 11 March 2025 01:31:15 +0000 (0:00:01.697) 0:05:55.728 ********* 2025-03-11 01:33:52.436312 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.436319 | orchestrator | 2025-03-11 01:33:52.436326 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-11 01:33:52.436333 | orchestrator | Tuesday 11 March 2025 01:31:16 +0000 (0:00:01.875) 0:05:57.604 ********* 2025-03-11 01:33:52.436345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.436357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.436400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436440 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.436450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436465 | orchestrator | 2025-03-11 01:33:52.436472 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-11 01:33:52.436479 | orchestrator | Tuesday 11 March 2025 01:31:24 +0000 (0:00:07.284) 0:06:04.889 ********* 2025-03-11 01:33:52.436486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.436504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436519 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.436557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436576 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.436606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-03-11 01:33:52.436629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:33:52.436637 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436644 | orchestrator | 2025-03-11 01:33:52.436651 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-11 01:33:52.436658 | orchestrator | Tuesday 11 March 2025 01:31:25 +0000 (0:00:01.339) 0:06:06.228 ********* 2025-03-11 01:33:52.436666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436699 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436734 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436763 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:33:52.436770 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436777 | orchestrator | 2025-03-11 01:33:52.436784 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-11 01:33:52.436791 | orchestrator | Tuesday 11 March 2025 01:31:27 +0000 (0:00:01.561) 0:06:07.790 ********* 2025-03-11 01:33:52.436798 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436805 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436815 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436822 | orchestrator | 2025-03-11 01:33:52.436829 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-11 01:33:52.436851 | orchestrator | Tuesday 11 March 2025 01:31:27 +0000 (0:00:00.654) 0:06:08.445 ********* 2025-03-11 01:33:52.436859 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.436866 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.436873 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.436880 | orchestrator | 2025-03-11 01:33:52.436887 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-11 01:33:52.436897 | orchestrator | Tuesday 11 March 2025 01:31:29 +0000 (0:00:01.745) 0:06:10.191 ********* 2025-03-11 01:33:52.436904 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.436911 | orchestrator | 2025-03-11 01:33:52.436918 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-03-11 01:33:52.436925 | orchestrator | Tuesday 11 March 2025 01:31:31 +0000 (0:00:01.977) 0:06:12.168 ********* 2025-03-11 01:33:52.436932 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-03-11 01:33:52.436939 | orchestrator | 2025-03-11 01:33:52.436946 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-11 01:33:52.436953 | orchestrator | Tuesday 11 March 2025 01:31:33 +0000 (0:00:01.498) 0:06:13.666 ********* 2025-03-11 01:33:52.436964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:33:52.436972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:33:52.436980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:33:52.436987 | orchestrator | 2025-03-11 01:33:52.436994 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-11 01:33:52.437002 | orchestrator | Tuesday 11 March 2025 01:31:39 +0000 (0:00:06.379) 0:06:20.045 ********* 2025-03-11 01:33:52.437015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437022 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437037 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437068 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437075 | orchestrator | 2025-03-11 01:33:52.437082 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-11 01:33:52.437089 | orchestrator | Tuesday 11 March 2025 01:31:41 +0000 (0:00:02.595) 0:06:22.641 ********* 2025-03-11 01:33:52.437096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437122 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437138 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:33:52.437160 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437167 | orchestrator | 2025-03-11 01:33:52.437174 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-11 01:33:52.437181 | orchestrator | Tuesday 11 March 2025 01:31:44 +0000 (0:00:02.462) 0:06:25.103 ********* 2025-03-11 01:33:52.437188 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437194 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437202 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437208 | orchestrator | 2025-03-11 01:33:52.437215 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-11 01:33:52.437222 | orchestrator | Tuesday 11 March 2025 01:31:45 +0000 (0:00:00.632) 0:06:25.735 ********* 2025-03-11 01:33:52.437229 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437236 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437243 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437250 | orchestrator | 2025-03-11 01:33:52.437257 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-11 01:33:52.437264 | orchestrator | Tuesday 11 March 2025 01:31:46 +0000 (0:00:01.299) 0:06:27.035 ********* 2025-03-11 01:33:52.437271 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-11 01:33:52.437278 | orchestrator | 2025-03-11 01:33:52.437285 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-11 01:33:52.437292 | orchestrator | Tuesday 11 March 2025 01:31:47 +0000 (0:00:01.210) 0:06:28.245 ********* 2025-03-11 01:33:52.437299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437306 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437324 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437354 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437361 | orchestrator | 2025-03-11 01:33:52.437368 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-03-11 01:33:52.437376 | orchestrator | Tuesday 11 March 2025 01:31:49 +0000 (0:00:01.729) 0:06:29.975 ********* 2025-03-11 01:33:52.437383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437390 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437410 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:33:52.437425 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437432 | orchestrator | 2025-03-11 01:33:52.437439 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-03-11 01:33:52.437446 | orchestrator | Tuesday 11 March 2025 01:31:51 +0000 (0:00:02.063) 0:06:32.038 ********* 2025-03-11 01:33:52.437453 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437460 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437467 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437474 | orchestrator | 2025-03-11 01:33:52.437481 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-11 01:33:52.437488 | orchestrator | Tuesday 11 March 2025 01:31:53 +0000 (0:00:02.494) 0:06:34.533 ********* 2025-03-11 01:33:52.437495 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437508 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437515 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437522 | orchestrator | 2025-03-11 01:33:52.437529 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-11 01:33:52.437536 | orchestrator | Tuesday 11 March 2025 01:31:54 +0000 (0:00:00.645) 0:06:35.179 ********* 2025-03-11 01:33:52.437543 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437550 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437557 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437563 | orchestrator | 2025-03-11 01:33:52.437570 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-serialproxy] ***************** 2025-03-11 01:33:52.437577 | orchestrator | Tuesday 11 March 2025 01:31:55 +0000 (0:00:01.340) 0:06:36.519 ********* 2025-03-11 01:33:52.437584 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-11 01:33:52.437591 | orchestrator | 2025-03-11 01:33:52.437598 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-03-11 01:33:52.437605 | orchestrator | Tuesday 11 March 2025 01:31:58 +0000 (0:00:02.204) 0:06:38.724 ********* 2025-03-11 01:33:52.437640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:33:52.437649 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.437657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:33:52.437664 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:33:52.437678 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.437685 | orchestrator | 2025-03-11 01:33:52.437692 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-11 01:33:52.437699 | orchestrator | Tuesday 11 March 2025 01:32:00 +0000 (0:00:02.480) 0:06:41.205 ********* 2025-03-11 01:33:52.437706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:33:52.437713 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.437726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-03-11 01:33:52.437734 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.437747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-03-11 01:33:52.437754 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.437761 | orchestrator |
2025-03-11 01:33:52.437768 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-03-11 01:33:52.437775 | orchestrator | Tuesday 11 March 2025 01:32:02 +0000 (0:00:02.317) 0:06:43.522 *********
2025-03-11 01:33:52.437782 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.437789 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.437796 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.437803 | orchestrator |
2025-03-11 01:33:52.437810 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-03-11 01:33:52.437820 | orchestrator | Tuesday 11 March 2025 01:32:05 +0000 (0:00:02.611) 0:06:46.133 *********
2025-03-11 01:33:52.437827 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.437834 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.437841 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.437848 | orchestrator |
2025-03-11 01:33:52.437855 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-03-11 01:33:52.437862 | orchestrator | Tuesday 11 March 2025 01:32:06 +0000 (0:00:00.682) 0:06:46.816 *********
2025-03-11 01:33:52.437869 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.437876 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.437898 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.437906 | orchestrator |
2025-03-11 01:33:52.437913 | orchestrator | TASK [include_role : octavia] **************************************************
2025-03-11 01:33:52.437920 | orchestrator | Tuesday 11 March 2025 01:32:07 +0000 (0:00:01.522) 0:06:48.339 *********
2025-03-11 01:33:52.437927 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.437934 | orchestrator |
2025-03-11 01:33:52.437941 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-03-11 01:33:52.437948 | orchestrator | Tuesday 11 March 2025 01:32:09 +0000 (0:00:02.058) 0:06:50.397 *********
2025-03-11 01:33:52.437956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.437969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.437977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.437985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.437997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.438050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.438063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.438114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.438123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438155 | orchestrator |
2025-03-11 01:33:52.438162 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-03-11 01:33:52.438170 | orchestrator | Tuesday 11 March 2025 01:32:15 +0000 (0:00:05.552) 0:06:55.950 *********
2025-03-11 01:33:52.438177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.438184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.438207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438235 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.438255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.438262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438304 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.438314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-03-11 01:33:52.438324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-03-11 01:33:52.438332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-03-11 01:33:52.438361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:33:52.438370 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.438377 | orchestrator |
2025-03-11 01:33:52.438384 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-03-11 01:33:52.438396 | orchestrator | Tuesday 11 March 2025 01:32:16 +0000 (0:00:01.233) 0:06:57.183 *********
2025-03-11 01:33:52.438403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438418 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438439 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.438447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-03-11 01:33:52.438463 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.438471 | orchestrator |
2025-03-11 01:33:52.438479 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-03-11 01:33:52.438486 | orchestrator | Tuesday 11 March 2025 01:32:18 +0000 (0:00:01.569) 0:06:58.753 *********
2025-03-11 01:33:52.438494 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438501 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.438508 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.438515 | orchestrator |
2025-03-11 01:33:52.438522 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-03-11 01:33:52.438529 | orchestrator | Tuesday 11 March 2025 01:32:18 +0000 (0:00:00.679) 0:06:59.433 *********
2025-03-11 01:33:52.438536 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438543 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.438550 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.438556 | orchestrator |
2025-03-11 01:33:52.438563 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-03-11 01:33:52.438570 | orchestrator | Tuesday 11 March 2025 01:32:20 +0000 (0:00:01.845) 0:07:01.278 *********
2025-03-11 01:33:52.438577 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:33:52.438584 | orchestrator |
2025-03-11 01:33:52.438591 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-03-11 01:33:52.438598 | orchestrator | Tuesday 11 March 2025 01:32:22 +0000 (0:00:01.987) 0:07:03.266 *********
2025-03-11 01:33:52.438605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438751 | orchestrator |
2025-03-11 01:33:52.438758 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-03-11 01:33:52.438765 | orchestrator | Tuesday 11 March 2025 01:32:29 +0000 (0:00:07.087) 0:07:10.354 *********
2025-03-11 01:33:52.438773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438793 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438837 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:33:52.438844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-03-11 01:33:52.438851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-03-11 01:33:52.438861 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:33:52.438868 | orchestrator |
2025-03-11 01:33:52.438876 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-03-11 01:33:52.438883 | orchestrator | Tuesday 11 March 2025 01:32:30 +0000 (0:00:01.029) 0:07:11.383 *********
2025-03-11 01:33:52.438890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-03-11 01:33:52.438897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-03-11 01:33:52.438909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-03-11 01:33:52.438916 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:33:52.438923 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-11 01:33:52.438931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:33:52.438938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:33:52.438945 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.438970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-11 01:33:52.438978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:33:52.438985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:33:52.438993 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.438999 | orchestrator | 2025-03-11 01:33:52.439007 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-11 01:33:52.439014 | orchestrator | Tuesday 11 March 2025 01:32:32 +0000 (0:00:02.223) 0:07:13.606 ********* 2025-03-11 
01:33:52.439021 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439028 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439035 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439042 | orchestrator | 2025-03-11 01:33:52.439049 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-11 01:33:52.439056 | orchestrator | Tuesday 11 March 2025 01:32:33 +0000 (0:00:00.371) 0:07:13.978 ********* 2025-03-11 01:33:52.439063 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439070 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439077 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439084 | orchestrator | 2025-03-11 01:33:52.439091 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-11 01:33:52.439098 | orchestrator | Tuesday 11 March 2025 01:32:35 +0000 (0:00:01.828) 0:07:15.806 ********* 2025-03-11 01:33:52.439105 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.439112 | orchestrator | 2025-03-11 01:33:52.439119 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-11 01:33:52.439129 | orchestrator | Tuesday 11 March 2025 01:32:37 +0000 (0:00:02.184) 0:07:17.991 ********* 2025-03-11 01:33:52.439136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:33:52.439149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:33:52.439202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:33:52.439257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:33:52.439300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:33:52.439367 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439389 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:33:52.439419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439464 | orchestrator | 2025-03-11 01:33:52.439470 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-11 01:33:52.439476 | orchestrator | Tuesday 11 March 2025 01:32:43 +0000 (0:00:05.855) 0:07:23.847 ********* 2025-03-11 01:33:52.439483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:33:52.439489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:33:52.439534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:33:52.439566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439597 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:33:52.439655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:33:52.439674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-03-11 01:33:52.439681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:33:52.439690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439719 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:33:52.439752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:33:52.439765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:33:52.439784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:33:52.439790 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439797 | orchestrator | 2025-03-11 01:33:52.439803 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-11 01:33:52.439809 | 
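Every item in the loops above is a kolla-style service definition: a dict with `enabled`, `image`, `volumes`, and an optional `haproxy` map of listeners, each carrying its own `enabled` flag and `port`. A minimal sketch of how such dicts are filtered (field names taken from the log items; the helper itself is illustrative, not kolla-ansible code):

```python
# Sketch: filter kolla-style service dicts down to their active haproxy
# listeners. Key names (enabled, haproxy, port) mirror the loop items in
# the log above; this helper is an assumption, not kolla-ansible's own code.
def enabled_listeners(services):
    """Yield (listener_name, port) for every enabled haproxy listener
    of every enabled service."""
    for svc in services.values():
        if not svc.get("enabled"):
            continue
        for name, listener in svc.get("haproxy", {}).items():
            # kolla mixes booleans and 'yes'/'no' strings for enabled flags
            if listener.get("enabled") in (True, "yes"):
                yield name, listener["port"]

services = {
    "prometheus-server": {
        "enabled": True,
        "haproxy": {
            "prometheus_server": {"enabled": True, "mode": "http", "port": "9091"},
            "prometheus_server_external": {"enabled": False, "mode": "http", "port": "9091"},
        },
    },
    "prometheus-openstack-exporter": {"enabled": False, "haproxy": {}},
}

print(dict(enabled_listeners(services)))  # {'prometheus_server': '9091'}
```

This matches the pattern in the log: listeners with `'enabled': False` (such as `prometheus_server_external`) and services that are disabled outright (such as `prometheus-openstack-exporter`) are skipped.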
orchestrator | Tuesday 11 March 2025 01:32:44 +0000 (0:00:01.409) 0:07:25.256 ********* 2025-03-11 01:33:52.439816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439841 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439881 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:33:52.439904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:33:52.439917 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439924 | orchestrator | 2025-03-11 01:33:52.439930 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 
2025-03-11 01:33:52.439936 | orchestrator | Tuesday 11 March 2025 01:32:46 +0000 (0:00:02.120) 0:07:27.377 ********* 2025-03-11 01:33:52.439942 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439949 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439955 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439961 | orchestrator | 2025-03-11 01:33:52.439967 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-11 01:33:52.439973 | orchestrator | Tuesday 11 March 2025 01:32:47 +0000 (0:00:00.649) 0:07:28.026 ********* 2025-03-11 01:33:52.439980 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.439986 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.439992 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.439998 | orchestrator | 2025-03-11 01:33:52.440004 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-11 01:33:52.440011 | orchestrator | Tuesday 11 March 2025 01:32:49 +0000 (0:00:01.808) 0:07:29.835 ********* 2025-03-11 01:33:52.440017 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.440023 | orchestrator | 2025-03-11 01:33:52.440029 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-11 01:33:52.440035 | orchestrator | Tuesday 11 March 2025 01:32:51 +0000 (0:00:02.176) 0:07:32.011 ********* 2025-03-11 01:33:52.440041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:33:52.440062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:33:52.440069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:33:52.440076 | orchestrator | 2025-03-11 01:33:52.440082 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-11 01:33:52.440088 | orchestrator | Tuesday 11 March 2025 01:32:54 +0000 (0:00:03.300) 0:07:35.312 ********* 2025-03-11 01:33:52.440095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:33:52.440101 | orchestrator | skipping: 
[testbed-node-0] 2025-03-11 01:33:52.440108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:33:52.440123 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:33:52.440140 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440146 | orchestrator | 2025-03-11 01:33:52.440152 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-11 01:33:52.440159 | orchestrator | Tuesday 11 March 2025 01:32:55 +0000 (0:00:00.466) 0:07:35.778 ********* 2025-03-11 01:33:52.440165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:33:52.440172 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:33:52.440184 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:33:52.440197 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440203 | orchestrator | 2025-03-11 01:33:52.440209 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-11 01:33:52.440215 | orchestrator | Tuesday 11 March 2025 01:32:56 +0000 (0:00:01.291) 0:07:37.069 ********* 2025-03-11 01:33:52.440221 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440228 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440234 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440240 | orchestrator | 2025-03-11 01:33:52.440246 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-03-11 
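The rabbitmq items above carry a `healthcheck` dict whose values are all strings (`'interval': '30'`, `'test': ['CMD-SHELL', 'healthcheck_rabbitmq']`, ...). A container engine expects numeric durations, so such a dict has to be converted before use; a sketch of that conversion, assuming Docker-style nanosecond durations (the input keys come from the log, the converter is illustrative):

```python
# Sketch: convert the string-valued kolla healthcheck dict from the
# rabbitmq service definition into the numeric form a Docker-style
# engine expects (durations in nanoseconds). Illustrative only.
NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc):
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_rabbitmq']
        "interval": int(hc["interval"]) * NS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NS_PER_SECOND,
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_rabbitmq"], "timeout": "30"}
print(to_docker_healthcheck(hc)["interval"])  # 30000000000
```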
01:33:52.440252 | orchestrator | Tuesday 11 March 2025 01:32:56 +0000 (0:00:00.370) 0:07:37.440 ********* 2025-03-11 01:33:52.440258 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440265 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440271 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440346 | orchestrator | 2025-03-11 01:33:52.440353 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-11 01:33:52.440360 | orchestrator | Tuesday 11 March 2025 01:32:58 +0000 (0:00:01.858) 0:07:39.298 ********* 2025-03-11 01:33:52.440366 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:33:52.440372 | orchestrator | 2025-03-11 01:33:52.440378 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-11 01:33:52.440384 | orchestrator | Tuesday 11 March 2025 01:33:00 +0000 (0:00:02.213) 0:07:41.512 ********* 2025-03-11 01:33:52.440391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 
01:33:52.440400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.440407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.440414 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.440425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.440432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': 
{'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:33:52.440438 | orchestrator | 2025-03-11 01:33:52.440444 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-03-11 01:33:52.440451 | orchestrator | Tuesday 11 March 2025 01:33:10 +0000 (0:00:09.532) 0:07:51.045 ********* 2025-03-11 01:33:52.440460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440478 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440497 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:33:52.440523 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440529 | orchestrator | 2025-03-11 01:33:52.440535 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-03-11 01:33:52.440542 | orchestrator | Tuesday 11 March 2025 01:33:11 +0000 (0:00:01.225) 0:07:52.270 ********* 2025-03-11 01:33:52.440548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440573 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440604 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 
01:33:52.440631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:33:52.440650 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440656 | orchestrator | 2025-03-11 01:33:52.440663 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-03-11 01:33:52.440669 | orchestrator | Tuesday 11 March 2025 01:33:13 +0000 (0:00:01.787) 0:07:54.058 ********* 2025-03-11 01:33:52.440679 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440685 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440694 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440701 | orchestrator | 2025-03-11 01:33:52.440707 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-03-11 01:33:52.440713 | orchestrator | Tuesday 11 March 2025 01:33:14 +0000 (0:00:00.683) 0:07:54.742 ********* 2025-03-11 01:33:52.440719 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440725 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440732 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440738 | orchestrator | 2025-03-11 01:33:52.440744 | orchestrator | TASK [include_role : swift] 
**************************************************** 2025-03-11 01:33:52.440750 | orchestrator | Tuesday 11 March 2025 01:33:16 +0000 (0:00:01.928) 0:07:56.670 ********* 2025-03-11 01:33:52.440757 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440763 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440769 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440775 | orchestrator | 2025-03-11 01:33:52.440781 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-03-11 01:33:52.440787 | orchestrator | Tuesday 11 March 2025 01:33:16 +0000 (0:00:00.661) 0:07:57.332 ********* 2025-03-11 01:33:52.440794 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440800 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440806 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440812 | orchestrator | 2025-03-11 01:33:52.440818 | orchestrator | TASK [include_role : trove] **************************************************** 2025-03-11 01:33:52.440825 | orchestrator | Tuesday 11 March 2025 01:33:17 +0000 (0:00:00.348) 0:07:57.681 ********* 2025-03-11 01:33:52.440831 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440837 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440843 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440849 | orchestrator | 2025-03-11 01:33:52.440855 | orchestrator | TASK [include_role : venus] **************************************************** 2025-03-11 01:33:52.440862 | orchestrator | Tuesday 11 March 2025 01:33:17 +0000 (0:00:00.637) 0:07:58.318 ********* 2025-03-11 01:33:52.440868 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440874 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440880 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440886 | orchestrator | 2025-03-11 01:33:52.440893 | orchestrator | TASK [include_role : watcher] 
************************************************** 2025-03-11 01:33:52.440899 | orchestrator | Tuesday 11 March 2025 01:33:18 +0000 (0:00:00.652) 0:07:58.970 ********* 2025-03-11 01:33:52.440905 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440911 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440917 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440923 | orchestrator | 2025-03-11 01:33:52.440930 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-03-11 01:33:52.440936 | orchestrator | Tuesday 11 March 2025 01:33:18 +0000 (0:00:00.622) 0:07:59.592 ********* 2025-03-11 01:33:52.440942 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.440948 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.440955 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.440961 | orchestrator | 2025-03-11 01:33:52.440967 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-03-11 01:33:52.440973 | orchestrator | Tuesday 11 March 2025 01:33:19 +0000 (0:00:00.872) 0:08:00.464 ********* 2025-03-11 01:33:52.440979 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.440986 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.440992 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.440998 | orchestrator | 2025-03-11 01:33:52.441004 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-03-11 01:33:52.441011 | orchestrator | Tuesday 11 March 2025 01:33:20 +0000 (0:00:01.051) 0:08:01.515 ********* 2025-03-11 01:33:52.441022 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441029 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441035 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441041 | orchestrator | 2025-03-11 01:33:52.441047 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] 
************** 2025-03-11 01:33:52.441054 | orchestrator | Tuesday 11 March 2025 01:33:21 +0000 (0:00:00.401) 0:08:01.917 ********* 2025-03-11 01:33:52.441060 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441066 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441072 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441078 | orchestrator | 2025-03-11 01:33:52.441084 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-03-11 01:33:52.441091 | orchestrator | Tuesday 11 March 2025 01:33:22 +0000 (0:00:01.481) 0:08:03.399 ********* 2025-03-11 01:33:52.441097 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441103 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441109 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441115 | orchestrator | 2025-03-11 01:33:52.441121 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-03-11 01:33:52.441127 | orchestrator | Tuesday 11 March 2025 01:33:24 +0000 (0:00:01.406) 0:08:04.806 ********* 2025-03-11 01:33:52.441134 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441140 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441146 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441152 | orchestrator | 2025-03-11 01:33:52.441160 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-03-11 01:33:52.441167 | orchestrator | Tuesday 11 March 2025 01:33:25 +0000 (0:00:01.362) 0:08:06.168 ********* 2025-03-11 01:33:52.441173 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:33:52.441179 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:33:52.441185 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:33:52.441192 | orchestrator | 2025-03-11 01:33:52.441198 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-03-11 01:33:52.441204 | orchestrator | 
Tuesday 11 March 2025 01:33:30 +0000 (0:00:05.465) 0:08:11.634 ********* 2025-03-11 01:33:52.441210 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441216 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441223 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441229 | orchestrator | 2025-03-11 01:33:52.441235 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-03-11 01:33:52.441241 | orchestrator | Tuesday 11 March 2025 01:33:34 +0000 (0:00:03.238) 0:08:14.872 ********* 2025-03-11 01:33:52.441247 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441254 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441260 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441266 | orchestrator | 2025-03-11 01:33:52.441272 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-03-11 01:33:52.441281 | orchestrator | Tuesday 11 March 2025 01:33:35 +0000 (0:00:01.171) 0:08:16.044 ********* 2025-03-11 01:33:52.441288 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:33:52.441294 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:33:52.441300 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:33:52.441306 | orchestrator | 2025-03-11 01:33:52.441312 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-03-11 01:33:52.441318 | orchestrator | Tuesday 11 March 2025 01:33:40 +0000 (0:00:05.558) 0:08:21.602 ********* 2025-03-11 01:33:52.441325 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441331 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441340 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441346 | orchestrator | 2025-03-11 01:33:52.441352 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-03-11 01:33:52.441359 | orchestrator | Tuesday 11 March 2025 
01:33:41 +0000 (0:00:00.408) 0:08:22.011 ********* 2025-03-11 01:33:52.441365 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441371 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441381 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441387 | orchestrator | 2025-03-11 01:33:52.441393 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-03-11 01:33:52.441400 | orchestrator | Tuesday 11 March 2025 01:33:42 +0000 (0:00:00.677) 0:08:22.688 ********* 2025-03-11 01:33:52.441406 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441412 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441418 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441425 | orchestrator | 2025-03-11 01:33:52.441431 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-03-11 01:33:52.441437 | orchestrator | Tuesday 11 March 2025 01:33:42 +0000 (0:00:00.698) 0:08:23.386 ********* 2025-03-11 01:33:52.441443 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441449 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441456 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441462 | orchestrator | 2025-03-11 01:33:52.441468 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-03-11 01:33:52.441474 | orchestrator | Tuesday 11 March 2025 01:33:43 +0000 (0:00:00.666) 0:08:24.053 ********* 2025-03-11 01:33:52.441480 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441486 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441492 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441499 | orchestrator | 2025-03-11 01:33:52.441505 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-03-11 01:33:52.441511 | orchestrator | Tuesday 11 March 2025 
01:33:43 +0000 (0:00:00.382) 0:08:24.435 ********* 2025-03-11 01:33:52.441517 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441524 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441530 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441536 | orchestrator | 2025-03-11 01:33:52.441542 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-03-11 01:33:52.441548 | orchestrator | Tuesday 11 March 2025 01:33:44 +0000 (0:00:00.726) 0:08:25.162 ********* 2025-03-11 01:33:52.441555 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:33:52.441561 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:33:52.441567 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:33:52.441573 | orchestrator | 2025-03-11 01:33:52.441579 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-03-11 01:33:52.441586 | orchestrator | Tuesday 11 March 2025 01:33:49 +0000 (0:00:05.288) 0:08:30.450 ********* 2025-03-11 01:33:52.441592 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:33:52.441598 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:33:52.441604 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:33:52.441621 | orchestrator | 2025-03-11 01:33:52.441627 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:33:52.441634 | orchestrator | testbed-node-0 : ok=83  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:33:52.441640 | orchestrator | testbed-node-1 : ok=82  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:33:52.441646 | orchestrator | testbed-node-2 : ok=82  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:33:52.441652 | orchestrator | 2025-03-11 01:33:52.441659 | orchestrator | 2025-03-11 01:33:52.441665 | orchestrator | TASKS RECAP 
******************************************************************** 2025-03-11 01:33:52.441671 | orchestrator | Tuesday 11 March 2025 01:33:50 +0000 (0:00:01.029) 0:08:31.480 ********* 2025-03-11 01:33:52.441677 | orchestrator | =============================================================================== 2025-03-11 01:33:52.441683 | orchestrator | haproxy-config : Copying over glance haproxy config -------------------- 11.07s 2025-03-11 01:33:52.441695 | orchestrator | haproxy-config : Copying over designate haproxy config ----------------- 10.69s 2025-03-11 01:33:55.466780 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.53s 2025-03-11 01:33:55.466927 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 9.12s 2025-03-11 01:33:55.466946 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 8.17s 2025-03-11 01:33:55.466961 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 7.97s 2025-03-11 01:33:55.466975 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.77s 2025-03-11 01:33:55.466990 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 7.65s 2025-03-11 01:33:55.467006 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.29s 2025-03-11 01:33:55.467020 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.22s 2025-03-11 01:33:55.467034 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 7.17s 2025-03-11 01:33:55.467073 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.09s 2025-03-11 01:33:55.467088 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.38s 2025-03-11 01:33:55.467102 | orchestrator | loadbalancer : 
Ensuring haproxy service config subdir exists ------------ 6.16s 2025-03-11 01:33:55.467116 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.16s 2025-03-11 01:33:55.467130 | orchestrator | loadbalancer : Removing checks for services which are disabled ---------- 5.93s 2025-03-11 01:33:55.467143 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.86s 2025-03-11 01:33:55.467157 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.84s 2025-03-11 01:33:55.467171 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.69s 2025-03-11 01:33:55.467186 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.69s 2025-03-11 01:33:55.467200 | orchestrator | 2025-03-11 01:33:52 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:55.467214 | orchestrator | 2025-03-11 01:33:52 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:33:55.467248 | orchestrator | 2025-03-11 01:33:55 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:55.469486 | orchestrator | 2025-03-11 01:33:55 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED 2025-03-11 01:33:55.470372 | orchestrator | 2025-03-11 01:33:55 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED 2025-03-11 01:33:58.523593 | orchestrator | 2025-03-11 01:33:55 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:33:58.523836 | orchestrator | 2025-03-11 01:33:58 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED 2025-03-11 01:33:58.525539 | orchestrator | 2025-03-11 01:33:58 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED 2025-03-11 01:33:58.528598 | orchestrator | 2025-03-11 01:33:58 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED 2025-03-11 
01:34:01.585332 | orchestrator | 2025-03-11 01:33:58 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:01.585469 | orchestrator | 2025-03-11 01:34:01 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:01.587417 | orchestrator | 2025-03-11 01:34:01 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:01.592282 | orchestrator | 2025-03-11 01:34:01 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:04.639566 | orchestrator | 2025-03-11 01:34:01 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:04.639725 | orchestrator | 2025-03-11 01:34:04 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:04.640983 | orchestrator | 2025-03-11 01:34:04 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:04.644401 | orchestrator | 2025-03-11 01:34:04 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:04.645853 | orchestrator | 2025-03-11 01:34:04 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:07.693190 | orchestrator | 2025-03-11 01:34:07 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:07.696254 | orchestrator | 2025-03-11 01:34:07 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:07.698804 | orchestrator | 2025-03-11 01:34:07 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:10.749411 | orchestrator | 2025-03-11 01:34:07 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:10.749542 | orchestrator | 2025-03-11 01:34:10 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:10.750361 | orchestrator | 2025-03-11 01:34:10 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:10.750396 | orchestrator | 2025-03-11 01:34:10 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:13.809133 | orchestrator | 2025-03-11 01:34:10 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:13.809260 | orchestrator | 2025-03-11 01:34:13 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:13.809777 | orchestrator | 2025-03-11 01:34:13 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:13.812258 | orchestrator | 2025-03-11 01:34:13 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:16.858480 | orchestrator | 2025-03-11 01:34:13 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:16.858663 | orchestrator | 2025-03-11 01:34:16 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:16.859370 | orchestrator | 2025-03-11 01:34:16 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:16.861508 | orchestrator | 2025-03-11 01:34:16 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:16.862428 | orchestrator | 2025-03-11 01:34:16 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:19.905760 | orchestrator | 2025-03-11 01:34:19 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:19.906212 | orchestrator | 2025-03-11 01:34:19 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:19.906247 | orchestrator | 2025-03-11 01:34:19 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:22.950286 | orchestrator | 2025-03-11 01:34:19 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:22.950396 | orchestrator | 2025-03-11 01:34:22 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:22.951146 | orchestrator | 2025-03-11 01:34:22 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:22.952307 | orchestrator | 2025-03-11 01:34:22 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:22.955261 | orchestrator | 2025-03-11 01:34:22 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:25.995876 | orchestrator | 2025-03-11 01:34:25 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:25.996286 | orchestrator | 2025-03-11 01:34:25 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:25.996349 | orchestrator | 2025-03-11 01:34:25 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:29.039836 | orchestrator | 2025-03-11 01:34:25 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:29.039974 | orchestrator | 2025-03-11 01:34:29 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:29.040579 | orchestrator | 2025-03-11 01:34:29 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:29.040610 | orchestrator | 2025-03-11 01:34:29 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:32.095066 | orchestrator | 2025-03-11 01:34:29 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:32.095196 | orchestrator | 2025-03-11 01:34:32 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:32.095454 | orchestrator | 2025-03-11 01:34:32 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:32.096547 | orchestrator | 2025-03-11 01:34:32 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:35.150004 | orchestrator | 2025-03-11 01:34:32 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:35.150179 | orchestrator | 2025-03-11 01:34:35 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:35.159216 | orchestrator | 2025-03-11 01:34:35 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:35.159277 | orchestrator | 2025-03-11 01:34:35 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:38.200601 | orchestrator | 2025-03-11 01:34:35 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:38.200814 | orchestrator | 2025-03-11 01:34:38 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:38.204140 | orchestrator | 2025-03-11 01:34:38 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:41.263788 | orchestrator | 2025-03-11 01:34:38 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:41.263909 | orchestrator | 2025-03-11 01:34:38 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:41.263946 | orchestrator | 2025-03-11 01:34:41 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:41.265522 | orchestrator | 2025-03-11 01:34:41 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:41.271549 | orchestrator | 2025-03-11 01:34:41 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:44.319673 | orchestrator | 2025-03-11 01:34:41 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:44.319808 | orchestrator | 2025-03-11 01:34:44 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:44.321091 | orchestrator | 2025-03-11 01:34:44 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:44.321131 | orchestrator | 2025-03-11 01:34:44 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:47.359070 | orchestrator | 2025-03-11 01:34:44 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:47.359207 | orchestrator | 2025-03-11 01:34:47 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:47.359907 | orchestrator | 2025-03-11 01:34:47 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:47.363344 | orchestrator | 2025-03-11 01:34:47 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:47.363633 | orchestrator | 2025-03-11 01:34:47 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:50.402538 | orchestrator | 2025-03-11 01:34:50 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:50.403750 | orchestrator | 2025-03-11 01:34:50 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:50.405388 | orchestrator | 2025-03-11 01:34:50 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:53.447494 | orchestrator | 2025-03-11 01:34:50 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:34:53.447629 | orchestrator | 2025-03-11 01:34:53 | INFO  | Task c2f954aa-0352-4829-aee0-801e3048e1b8 is in state STARTED
2025-03-11 01:34:53.449241 | orchestrator | 2025-03-11 01:34:53 | INFO  | Task 8bcefa5f-629c-41ee-b519-45095e25468d is in state STARTED
2025-03-11 01:34:53.451603 | orchestrator | 2025-03-11 01:34:53 | INFO  | Task 0592477d-8f98-4767-88ed-92b3c91f87ab is in state STARTED
2025-03-11 01:34:56.500871 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-03-11 01:34:56.509899 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-11 01:34:57.282229 |
2025-03-11 01:34:57.282454 | PLAY [Post output play]
2025-03-11 01:34:57.313187 |
2025-03-11 01:34:57.313398 | LOOP [stage-output : Register sources]
2025-03-11 01:34:57.401767 |
2025-03-11 01:34:57.402095 | TASK [stage-output : Check sudo]
2025-03-11 01:34:58.120092 | orchestrator | sudo: a password is required
2025-03-11 01:34:58.447792 | orchestrator | ok: Runtime: 0:00:00.018773
2025-03-11 01:34:58.458761 |
2025-03-11 01:34:58.458909 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-11 01:34:58.506055 |
2025-03-11 01:34:58.506338 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-11 01:34:58.602541 | orchestrator | ok
2025-03-11 01:34:58.613278 |
2025-03-11 01:34:58.613412 | LOOP [stage-output : Ensure target folders exist]
2025-03-11 01:34:59.080575 | orchestrator | ok: "docs"
2025-03-11 01:34:59.082687 |
2025-03-11 01:34:59.347850 | orchestrator | ok: "artifacts"
2025-03-11 01:34:59.594322 | orchestrator | ok: "logs"
2025-03-11 01:34:59.620525 |
2025-03-11 01:34:59.620792 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-11 01:34:59.665785 |
2025-03-11 01:34:59.666066 | TASK [stage-output : Make all log files readable]
2025-03-11 01:34:59.958504 | orchestrator | ok
2025-03-11 01:34:59.969711 |
2025-03-11 01:34:59.969834 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-11 01:35:00.016145 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:00.029827 |
2025-03-11 01:35:00.029972 | TASK [stage-output : Discover log files for compression]
2025-03-11 01:35:00.056060 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:00.072946 |
2025-03-11 01:35:00.073079 | LOOP [stage-output : Archive everything from logs]
2025-03-11 01:35:00.149836 |
2025-03-11 01:35:00.150004 | PLAY [Post cleanup play]
2025-03-11 01:35:00.173986 |
2025-03-11 01:35:00.174093 | TASK [Set cloud fact (Zuul deployment)]
2025-03-11 01:35:00.246181 | orchestrator | ok
2025-03-11 01:35:00.259205 |
2025-03-11 01:35:00.259376 | TASK [Set cloud fact (local deployment)]
2025-03-11 01:35:00.304808 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:00.321590 |
2025-03-11 01:35:00.321732 | TASK [Clean the cloud environment]
2025-03-11 01:35:01.134626 | orchestrator | 2025-03-11 01:35:01 - clean up servers
2025-03-11 01:35:02.048698 | orchestrator | 2025-03-11 01:35:02 - testbed-manager
2025-03-11 01:35:02.138253 | orchestrator | 2025-03-11 01:35:02 - testbed-node-1
2025-03-11 01:35:02.226480 | orchestrator | 2025-03-11 01:35:02 - testbed-node-2
2025-03-11 01:35:02.313133 | orchestrator | 2025-03-11 01:35:02 - testbed-node-0
2025-03-11 01:35:02.406663 | orchestrator | 2025-03-11 01:35:02 - testbed-node-4
2025-03-11 01:35:02.503474 | orchestrator | 2025-03-11 01:35:02 - testbed-node-3
2025-03-11 01:35:02.605952 | orchestrator | 2025-03-11 01:35:02 - testbed-node-5
2025-03-11 01:35:02.696261 | orchestrator | 2025-03-11 01:35:02 - clean up keypairs
2025-03-11 01:35:02.715942 | orchestrator | 2025-03-11 01:35:02 - testbed
2025-03-11 01:35:02.745849 | orchestrator | 2025-03-11 01:35:02 - wait for servers to be gone
2025-03-11 01:35:14.030518 | orchestrator | 2025-03-11 01:35:14 - clean up ports
2025-03-11 01:35:14.242709 | orchestrator | 2025-03-11 01:35:14 - 3673f77f-c40a-4761-8db8-c80b33cd541e
2025-03-11 01:35:14.498520 | orchestrator | 2025-03-11 01:35:14 - 3bea2d1d-9f70-4f20-9262-be62221bfc6d
2025-03-11 01:35:14.688418 | orchestrator | 2025-03-11 01:35:14 - 42c14717-af58-4b05-9230-29c3be21d28d
2025-03-11 01:35:15.028540 | orchestrator | 2025-03-11 01:35:15 - 9a6747c6-a514-40b7-bac4-e7abddd476af
2025-03-11 01:35:15.286114 | orchestrator | 2025-03-11 01:35:15 - b5734ee6-4680-4770-bf07-c8f0dc30f574
2025-03-11 01:35:15.536311 | orchestrator | 2025-03-11 01:35:15 - c791deea-c6bc-4e0a-85cd-3be1d4b3d78c
2025-03-11 01:35:15.746136 | orchestrator | 2025-03-11 01:35:15 - df778e65-1dfe-436c-9042-a5f37b778f23
2025-03-11 01:35:15.935011 | orchestrator | 2025-03-11 01:35:15 - clean up volumes
2025-03-11 01:35:16.087764 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-manager-base
2025-03-11 01:35:16.124322 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-0-node-base
2025-03-11 01:35:16.169562 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-1-node-base
2025-03-11 01:35:16.207531 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-2-node-base
2025-03-11 01:35:16.249741 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-3-node-base
2025-03-11 01:35:16.289706 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-5-node-base
2025-03-11 01:35:16.329333 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-3-node-3
2025-03-11 01:35:16.369570 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-8-node-2
2025-03-11 01:35:16.409843 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-5-node-5
2025-03-11 01:35:16.447757 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-16-node-4
2025-03-11 01:35:16.492010 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-10-node-4
2025-03-11 01:35:16.532811 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-4-node-base
2025-03-11 01:35:16.571892 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-14-node-2
2025-03-11 01:35:16.611119 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-0-node-0
2025-03-11 01:35:16.649952 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-6-node-0
2025-03-11 01:35:16.689058 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-12-node-0
2025-03-11 01:35:16.731797 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-15-node-3
2025-03-11 01:35:16.773518 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-17-node-5
2025-03-11 01:35:16.815152 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-1-node-1
2025-03-11 01:35:16.853181 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-13-node-1
2025-03-11 01:35:16.902221 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-2-node-2
2025-03-11 01:35:16.940131 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-7-node-1
2025-03-11 01:35:16.983175 | orchestrator | 2025-03-11 01:35:16 - testbed-volume-11-node-5
2025-03-11 01:35:17.025819 | orchestrator | 2025-03-11 01:35:17 - testbed-volume-9-node-3
2025-03-11 01:35:17.067415 | orchestrator | 2025-03-11 01:35:17 - testbed-volume-4-node-4
2025-03-11 01:35:17.108679 | orchestrator | 2025-03-11 01:35:17 - disconnect routers
2025-03-11 01:35:17.161468 | orchestrator | 2025-03-11 01:35:17 - testbed
2025-03-11 01:35:17.840022 | orchestrator | 2025-03-11 01:35:17 - clean up subnets
2025-03-11 01:35:17.873817 | orchestrator | 2025-03-11 01:35:17 - subnet-testbed-management
2025-03-11 01:35:17.999295 | orchestrator | 2025-03-11 01:35:17 - clean up networks
2025-03-11 01:35:18.164230 | orchestrator | 2025-03-11 01:35:18 - net-testbed-management
2025-03-11 01:35:18.412704 | orchestrator | 2025-03-11 01:35:18 - clean up security groups
2025-03-11 01:35:18.451982 | orchestrator | 2025-03-11 01:35:18 - testbed-node
2025-03-11 01:35:18.549468 | orchestrator | 2025-03-11 01:35:18 - testbed-management
2025-03-11 01:35:18.635886 | orchestrator | 2025-03-11 01:35:18 - clean up floating ips
2025-03-11 01:35:18.669428 | orchestrator | 2025-03-11 01:35:18 - 81.163.192.198
2025-03-11 01:35:19.054274 | orchestrator | 2025-03-11 01:35:19 - clean up routers
2025-03-11 01:35:19.102201 | orchestrator | 2025-03-11 01:35:19 - testbed
2025-03-11 01:35:20.888283 | orchestrator | changed
2025-03-11 01:35:20.934585 |
2025-03-11 01:35:20.934687 | PLAY RECAP
2025-03-11 01:35:20.934744 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-03-11 01:35:20.934770 |
2025-03-11 01:35:21.055958 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-11 01:35:21.059204 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-11 01:35:21.775828 |
2025-03-11 01:35:21.775975 | PLAY [Base post-fetch]
2025-03-11 01:35:21.805992 |
2025-03-11 01:35:21.806128 | TASK [fetch-output : Set log path for multiple nodes]
2025-03-11 01:35:21.883596 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:21.901298 |
2025-03-11 01:35:21.901494 | TASK [fetch-output : Set log path for single node]
2025-03-11 01:35:21.965949 | orchestrator | ok
2025-03-11 01:35:21.974731 |
2025-03-11 01:35:21.974841 | LOOP [fetch-output : Ensure local output dirs]
2025-03-11 01:35:22.497698 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/logs"
2025-03-11 01:35:22.814428 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/artifacts"
2025-03-11 01:35:23.088921 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a8d42800f1c548199d1fbe7f4c48adb3/work/docs"
2025-03-11 01:35:23.120031 |
2025-03-11 01:35:23.120290 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-03-11 01:35:23.982504 | orchestrator | changed: .d..t...... ./
2025-03-11 01:35:23.983186 | orchestrator | changed: All items complete
2025-03-11 01:35:23.983303 |
2025-03-11 01:35:24.604303 | orchestrator | changed: .d..t...... ./
2025-03-11 01:35:25.211963 | orchestrator | changed: .d..t...... ./
2025-03-11 01:35:25.247683 |
2025-03-11 01:35:25.247967 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-03-11 01:35:25.296716 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:25.304164 | orchestrator | skipping: Conditional result was False
2025-03-11 01:35:25.361589 |
2025-03-11 01:35:25.361732 | PLAY RECAP
2025-03-11 01:35:25.361799 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-03-11 01:35:25.361828 |
2025-03-11 01:35:25.504858 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-11 01:35:25.513446 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-11 01:35:26.202875 |
2025-03-11 01:35:26.203498 | PLAY [Base post]
2025-03-11 01:35:26.233273 |
2025-03-11 01:35:26.233406 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-03-11 01:35:27.136135 | orchestrator | changed
2025-03-11 01:35:27.175207 |
2025-03-11 01:35:27.175352 | PLAY RECAP
2025-03-11 01:35:27.175418 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-03-11 01:35:27.175484 |
2025-03-11 01:35:27.292834 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-11 01:35:27.301395 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-03-11 01:35:28.052698 |
2025-03-11 01:35:28.052849 | PLAY [Base post-logs]
2025-03-11 01:35:28.068987 |
2025-03-11 01:35:28.069109 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-03-11 01:35:28.583478 | localhost | changed
2025-03-11 01:35:28.590881 |
2025-03-11 01:35:28.591070 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-03-11 01:35:28.636445 | localhost | ok
2025-03-11 01:35:28.646631 |
2025-03-11 01:35:28.646764 | TASK [Set zuul-log-path fact]
2025-03-11 01:35:28.667732 | localhost | ok
2025-03-11 01:35:28.684184 |
2025-03-11 01:35:28.684472 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-11 01:35:28.719677 | localhost | skipping: Conditional result was False
2025-03-11 01:35:28.726625 |
2025-03-11 01:35:28.726784 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-11 01:35:28.770392 | localhost | ok
2025-03-11 01:35:28.775879 |
2025-03-11 01:35:28.776024 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-11 01:35:28.823096 | localhost | skipping: Conditional result was False
2025-03-11 01:35:28.831898 |
2025-03-11 01:35:28.832080 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-11 01:35:28.858957 | localhost | skipping: Conditional result was False
2025-03-11 01:35:28.866552 |
2025-03-11 01:35:28.866740 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-11 01:35:28.893477 | localhost | skipping: Conditional result was False
2025-03-11 01:35:28.902545 |
2025-03-11 01:35:28.902729 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-11 01:35:28.940845 | localhost | skipping: Conditional result was False
2025-03-11 01:35:28.954009 |
2025-03-11 01:35:28.954158 | TASK [upload-logs : Create log directories]
2025-03-11 01:35:29.518897 | localhost | changed
2025-03-11 01:35:29.526366 |
2025-03-11 01:35:29.526513 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-03-11 01:35:30.046230 | localhost -> localhost | ok: Runtime: 0:00:00.007395
2025-03-11 01:35:30.051786 |
2025-03-11 01:35:30.051916 | TASK [upload-logs : Upload logs to log server]
2025-03-11 01:35:30.702622 | localhost | Output suppressed because no_log was given
2025-03-11 01:35:30.709310 |
2025-03-11 01:35:30.709476 | LOOP [upload-logs : Compress console log and json output]
2025-03-11 01:35:30.814100 | localhost | skipping: Conditional result was False
2025-03-11 01:35:30.834307 | localhost | skipping: Conditional result was False
2025-03-11 01:35:30.841037 |
2025-03-11 01:35:30.841147 | LOOP [upload-logs : Upload compressed console log and json output]
2025-03-11 01:35:30.914038 | localhost | skipping: Conditional result was False
2025-03-11 01:35:30.914669 |
2025-03-11 01:35:30.926634 | localhost | skipping: Conditional result was False
2025-03-11 01:35:30.946263 |
2025-03-11 01:35:30.946482 | LOOP [upload-logs : Upload console log and json output]
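The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages before RUN END RESULT_TIMED_OUT is a client polling remote task state at a fixed interval until every task reaches a terminal state; here the three tasks never left STARTED before the job's deadline expired. A minimal sketch of such a poll loop with an overall timeout (function and parameter names are illustrative, not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_state, timeout=120.0, interval=1.0):
    """Poll each task's state until all reach a terminal state or the
    deadline expires. Returns the final {task_id: state} mapping;
    raises TimeoutError if any task is still pending at the deadline."""
    terminal = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal states
    deadline = time.monotonic() + timeout
    while True:
        # Query and report the current state of every task.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"INFO  | Task {tid} is in state {state}")
        if all(s in terminal for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            pending = [t for t, s in states.items() if s not in terminal]
            raise TimeoutError(f"tasks still pending: {pending}")
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Using a monotonic clock for the deadline keeps the overall timeout correct even if the per-check work (here, the state queries themselves) takes longer than the sleep interval, which matches the roughly three-second gaps between one-second waits in the log above.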
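The "Clean the cloud environment" task in the post cleanup play deletes resources strictly in dependency order: servers and their keypairs first, then the ports and volumes they held, then router interfaces are disconnected before subnets, networks, security groups, and floating IPs are removed, and the routers themselves go last. That ordering can be sketched as a generic teardown driver (the step names mirror the log; the callable-based interface is a hypothetical illustration, not the testbed's actual cleanup script):

```python
def clean_testbed(cleaners):
    """Tear down cloud resources in dependency order: servers must go
    before their ports, ports before the network, and the router is
    deleted last once its interfaces are disconnected. `cleaners` maps
    a step name to a zero-arg callable that deletes the matching
    resources and returns the names it removed."""
    order = [
        "servers",             # instances first; releases ports and volumes
        "keypairs",
        "ports",               # deletable once the servers are gone
        "volumes",
        "disconnect routers",  # detach interfaces before subnet removal
        "subnets",
        "networks",
        "security groups",
        "floating ips",
        "routers",             # last: nothing references them anymore
    ]
    removed = {}
    for step in order:
        print(f"clean up {step}")
        removed[step] = cleaners.get(step, list)()  # default: nothing to do
    return removed
```

Running a step with no registered cleaner simply removes nothing, so the same driver works whether or not a given resource type exists in the deployment being cleaned.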